---
library_name: transformers
base_model: carlosleao/vit-Facial-Expression-Recognition
tags:
  - generated_from_trainer
metrics:
  - accuracy
model-index:
  - name: vit-Facial-Expression-Recognition
    results: []
---

vit-Facial-Expression-Recognition

This model is a fine-tuned version of carlosleao/vit-Facial-Expression-Recognition on an unspecified dataset. It achieves the following results on the evaluation set:

  • Loss: 2.2687
  • Accuracy: 0.4177
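
As a usage illustration (not part of the original card), the checkpoint can be loaded with the standard `transformers` image-classification pipeline, which applies the model's bundled image processor automatically. The repository id and image path below are placeholders; substitute the actual id under which this checkpoint is published.

```python
from PIL import Image
from transformers import pipeline

# Placeholder repository id; replace with the actual checkpoint id.
MODEL_ID = "carlosleao/vit-Facial-Expression-Recognition"

# The image-classification pipeline loads both the ViT model and its
# image processor from the Hub.
classifier = pipeline("image-classification", model=MODEL_ID)

image = Image.open("face.jpg")  # illustrative path to a cropped face image
for prediction in classifier(image):
    print(prediction["label"], round(prediction["score"], 4))
```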

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 3e-05
  • train_batch_size: 32
  • eval_batch_size: 32
  • seed: 42
  • gradient_accumulation_steps: 8
  • total_train_batch_size: 256
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_steps: 1000
  • num_epochs: 30
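
For reference, these settings roughly map onto the `TrainingArguments` sketch below. This is a reconstruction, not the original training script: the output directory and the steps-based evaluation cadence are assumptions, while the remaining values mirror the list above.

```python
from transformers import TrainingArguments

# Reconstruction of the hyperparameters listed above; output_dir and the
# evaluation schedule are assumptions, not values from the original run.
training_args = TrainingArguments(
    output_dir="vit-Facial-Expression-Recognition",  # placeholder
    learning_rate=3e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    gradient_accumulation_steps=8,  # 32 x 8 = 256 effective train batch size
    num_train_epochs=30,
    lr_scheduler_type="cosine",
    warmup_steps=1000,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    eval_strategy="steps",  # assumed; the table below reports eval every 100 steps
    eval_steps=100,
)
```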

Training results

| Training Loss | Epoch   | Step | Validation Loss | Accuracy |
|:-------------:|:-------:|:----:|:---------------:|:--------:|
| 0.9372        | 0.8959  | 100  | 1.5720          | 0.4417   |
| 0.9147        | 1.7917  | 200  | 1.6084          | 0.4364   |
| 0.8393        | 2.6876  | 300  | 1.7268          | 0.4169   |
| 0.7882        | 3.5834  | 400  | 1.7604          | 0.4227   |
| 0.6916        | 4.4793  | 500  | 1.8619          | 0.4124   |
| 0.6367        | 5.3751  | 600  | 1.9493          | 0.4261   |
| 0.5848        | 6.2710  | 700  | 2.0511          | 0.4046   |
| 0.5183        | 7.1669  | 800  | 2.1316          | 0.4230   |
| 0.4788        | 8.0627  | 900  | 2.2210          | 0.4026   |
| 0.4586        | 8.9586  | 1000 | 2.2687          | 0.4177   |
| 0.4079        | 9.8544  | 1100 | 2.4038          | 0.3747   |
| 0.3797        | 10.7503 | 1200 | 2.3664          | 0.4046   |
| 0.2957        | 11.6461 | 1300 | 2.4534          | 0.4068   |
| 0.2622        | 12.5420 | 1400 | 2.5413          | 0.3956   |
| 0.2202        | 13.4378 | 1500 | 2.5601          | 0.4127   |
| 0.2112        | 14.3337 | 1600 | 2.6560          | 0.3920   |
| 0.1769        | 15.2296 | 1700 | 2.8006          | 0.3909   |
| 0.161         | 16.1254 | 1800 | 2.8011          | 0.3928   |
| 0.155         | 17.0213 | 1900 | 2.9518          | 0.3856   |
| 0.1309        | 17.9171 | 2000 | 2.9363          | 0.3727   |
| 0.1001        | 18.8130 | 2100 | 2.9187          | 0.3998   |
| 0.0816        | 19.7088 | 2200 | 3.0563          | 0.3842   |
| 0.0672        | 20.6047 | 2300 | 2.9358          | 0.4205   |
| 0.0567        | 21.5006 | 2400 | 3.1118          | 0.3970   |
| 0.0524        | 22.3964 | 2500 | 3.2147          | 0.4054   |
| 0.0413        | 23.2923 | 2600 | 3.1928          | 0.3951   |
| 0.0368        | 24.1881 | 2700 | 3.1599          | 0.4141   |
| 0.0275        | 25.0840 | 2800 | 3.1720          | 0.4166   |
| 0.029         | 25.9798 | 2900 | 3.1924          | 0.4012   |
| 0.0231        | 26.8757 | 3000 | 3.2031          | 0.4088   |
| 0.0226        | 27.7716 | 3100 | 3.2125          | 0.4113   |
| 0.0205        | 28.6674 | 3200 | 3.2122          | 0.4118   |
| 0.0197        | 29.5633 | 3300 | 3.2126          | 0.4116   |

Framework versions

  • Transformers 4.45.2
  • PyTorch 2.5.0+cu124
  • Datasets 3.0.1
  • Tokenizers 0.20.1
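
As a quick sanity check before trying to reproduce these results, the installed versions can be compared against the ones reported above; this snippet is illustrative and not part of the original card.

```python
import datasets
import tokenizers
import torch
import transformers

# Versions reported in this card; other versions may work but are untested here.
expected = {
    "transformers": "4.45.2",
    "torch": "2.5.0+cu124",
    "datasets": "3.0.1",
    "tokenizers": "0.20.1",
}
installed = {
    "transformers": transformers.__version__,
    "torch": torch.__version__,
    "datasets": datasets.__version__,
    "tokenizers": tokenizers.__version__,
}

for name, version in installed.items():
    status = "matches" if version == expected[name] else "differs"
    print(f"{name}: installed {version}, card {expected[name]} ({status})")
```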