
Emotion-Image-Classification-V3

This model is a fine-tuned version of google/vit-base-patch16-224-in21k on an image dataset loaded with the imagefolder builder. It achieves the following results on the evaluation set:

  • Loss: 1.5048
  • Accuracy: 0.6375
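
As a usage sketch (not part of the original card), the checkpoint can be loaded with the Transformers image-classification pipeline; the repository id jhoppanne/Emotion-Image-Classification-V3 is taken from the model page, while the image path below is a placeholder.

```python
from PIL import Image
from transformers import pipeline

# Minimal inference sketch; assumes the checkpoint is downloadable from the Hub
# under the repository id shown on this page.
classifier = pipeline(
    "image-classification",
    model="jhoppanne/Emotion-Image-Classification-V3",
)

image = Image.open("face.jpg")   # placeholder path to any RGB image
predictions = classifier(image)  # list of {"label": ..., "score": ...}, best score first
print(predictions)
```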

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed
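
The card only notes that training used the imagefolder loader; as a hedged sketch, such a dataset is typically loaded as below, where the directory path and class-per-subfolder layout are assumptions rather than documented facts.

```python
from datasets import load_dataset

# Generic imagefolder loading sketch; the actual directory layout, splits,
# and emotion label names are not documented in this card.
dataset = load_dataset("imagefolder", data_dir="path/to/emotion_images")
print(dataset)                    # DatasetDict of the splits found in the folder
print(dataset["train"].features)  # an "image" column plus a class "label" column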

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a sketch of the equivalent TrainingArguments appears after the list):

  • learning_rate: 5e-05
  • train_batch_size: 32
  • eval_batch_size: 32
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 100
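
As an illustration only, the hyperparameters above map onto transformers.TrainingArguments roughly as follows; the output directory and evaluation strategy are assumptions, since the original training script is not published with this card.

```python
from transformers import TrainingArguments

# Sketch of the reported hyperparameters expressed as TrainingArguments.
# output_dir and evaluation_strategy are assumptions, not taken from the card.
training_args = TrainingArguments(
    output_dir="emotion-image-classification-v3",  # placeholder
    learning_rate=5e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    num_train_epochs=100,
    lr_scheduler_type="linear",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    evaluation_strategy="epoch",  # consistent with the per-epoch validation rows below
)
```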

Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 20 | 1.4141 | 0.5437 |
| No log | 2.0 | 40 | 1.6711 | 0.4375 |
| No log | 3.0 | 60 | 1.3988 | 0.6 |
| No log | 4.0 | 80 | 1.5072 | 0.5625 |
| No log | 5.0 | 100 | 1.3970 | 0.6125 |
| No log | 6.0 | 120 | 1.3488 | 0.625 |
| No log | 7.0 | 140 | 1.4599 | 0.5437 |
| No log | 8.0 | 160 | 1.4678 | 0.5813 |
| No log | 9.0 | 180 | 1.6072 | 0.5375 |
| No log | 10.0 | 200 | 1.2243 | 0.6312 |
| No log | 11.0 | 220 | 1.2860 | 0.5875 |
| No log | 12.0 | 240 | 1.2472 | 0.5875 |
| No log | 13.0 | 260 | 1.3423 | 0.5875 |
| No log | 14.0 | 280 | 1.3879 | 0.5875 |
| No log | 15.0 | 300 | 1.4201 | 0.575 |
| No log | 16.0 | 320 | 1.5388 | 0.5312 |
| No log | 17.0 | 340 | 1.5433 | 0.55 |
| No log | 18.0 | 360 | 1.3812 | 0.5875 |
| No log | 19.0 | 380 | 1.4629 | 0.5938 |
| No log | 20.0 | 400 | 1.5240 | 0.525 |
| No log | 21.0 | 420 | 1.4818 | 0.5437 |
| No log | 22.0 | 440 | 1.4461 | 0.5687 |
| No log | 23.0 | 460 | 1.3944 | 0.5875 |
| No log | 24.0 | 480 | 1.6598 | 0.55 |
| 0.1882 | 25.0 | 500 | 1.4268 | 0.6188 |
| 0.1882 | 26.0 | 520 | 1.6246 | 0.5563 |
| 0.1882 | 27.0 | 540 | 1.3836 | 0.6125 |
| 0.1882 | 28.0 | 560 | 1.7652 | 0.4813 |
| 0.1882 | 29.0 | 580 | 1.4360 | 0.5625 |
| 0.1882 | 30.0 | 600 | 1.5103 | 0.55 |
| 0.1882 | 31.0 | 620 | 1.4546 | 0.5563 |
| 0.1882 | 32.0 | 640 | 1.4085 | 0.575 |
| 0.1882 | 33.0 | 660 | 1.4729 | 0.6062 |
| 0.1882 | 34.0 | 680 | 1.7415 | 0.5375 |
| 0.1882 | 35.0 | 700 | 1.7349 | 0.5375 |
| 0.1882 | 36.0 | 720 | 1.6331 | 0.5687 |
| 0.1882 | 37.0 | 740 | 1.5159 | 0.6062 |
| 0.1882 | 38.0 | 760 | 1.5464 | 0.5875 |
| 0.1882 | 39.0 | 780 | 1.5402 | 0.5938 |
| 0.1882 | 40.0 | 800 | 1.5403 | 0.6 |
| 0.1882 | 41.0 | 820 | 1.4509 | 0.65 |
| 0.1882 | 42.0 | 840 | 1.7641 | 0.5437 |
| 0.1882 | 43.0 | 860 | 1.5503 | 0.5813 |
| 0.1882 | 44.0 | 880 | 1.6178 | 0.5687 |
| 0.1882 | 45.0 | 900 | 1.5877 | 0.6062 |
| 0.1882 | 46.0 | 920 | 1.7210 | 0.55 |
| 0.1882 | 47.0 | 940 | 1.5960 | 0.6188 |
| 0.1882 | 48.0 | 960 | 1.7922 | 0.55 |
| 0.1882 | 49.0 | 980 | 2.0035 | 0.525 |
| 0.1299 | 50.0 | 1000 | 1.8269 | 0.5062 |
| 0.1299 | 51.0 | 1020 | 1.6933 | 0.5687 |
| 0.1299 | 52.0 | 1040 | 1.7252 | 0.5312 |
| 0.1299 | 53.0 | 1060 | 1.6312 | 0.6 |
| 0.1299 | 54.0 | 1080 | 1.8208 | 0.5375 |
| 0.1299 | 55.0 | 1100 | 1.7589 | 0.575 |
| 0.1299 | 56.0 | 1120 | 1.7185 | 0.5875 |
| 0.1299 | 57.0 | 1140 | 1.7227 | 0.5437 |
| 0.1299 | 58.0 | 1160 | 1.8849 | 0.5188 |
| 0.1299 | 59.0 | 1180 | 1.7565 | 0.5687 |
| 0.1299 | 60.0 | 1200 | 1.6048 | 0.6062 |
| 0.1299 | 61.0 | 1220 | 1.5088 | 0.6125 |
| 0.1299 | 62.0 | 1240 | 1.6270 | 0.5687 |
| 0.1299 | 63.0 | 1260 | 1.5913 | 0.625 |
| 0.1299 | 64.0 | 1280 | 1.7789 | 0.5625 |
| 0.1299 | 65.0 | 1300 | 1.7923 | 0.55 |
| 0.1299 | 66.0 | 1320 | 1.9365 | 0.575 |
| 0.1299 | 67.0 | 1340 | 1.7365 | 0.5938 |
| 0.1299 | 68.0 | 1360 | 1.8584 | 0.55 |
| 0.1299 | 69.0 | 1380 | 1.9811 | 0.5062 |
| 0.1299 | 70.0 | 1400 | 1.9433 | 0.55 |
| 0.1299 | 71.0 | 1420 | 1.7644 | 0.575 |
| 0.1299 | 72.0 | 1440 | 1.7661 | 0.6 |
| 0.1299 | 73.0 | 1460 | 1.8884 | 0.5687 |
| 0.1299 | 74.0 | 1480 | 1.7504 | 0.5813 |
| 0.0774 | 75.0 | 1500 | 1.9648 | 0.5687 |
| 0.0774 | 76.0 | 1520 | 1.8968 | 0.5437 |
| 0.0774 | 77.0 | 1540 | 1.7752 | 0.5875 |
| 0.0774 | 78.0 | 1560 | 1.7504 | 0.625 |
| 0.0774 | 79.0 | 1580 | 1.7458 | 0.6 |
| 0.0774 | 80.0 | 1600 | 1.8044 | 0.5938 |
| 0.0774 | 81.0 | 1620 | 1.6748 | 0.5813 |
| 0.0774 | 82.0 | 1640 | 1.7661 | 0.575 |
| 0.0774 | 83.0 | 1660 | 1.8534 | 0.575 |
| 0.0774 | 84.0 | 1680 | 1.7733 | 0.6125 |
| 0.0774 | 85.0 | 1700 | 1.7857 | 0.575 |
| 0.0774 | 86.0 | 1720 | 1.7397 | 0.6 |
| 0.0774 | 87.0 | 1740 | 1.7496 | 0.5813 |
| 0.0774 | 88.0 | 1760 | 1.8774 | 0.5813 |
| 0.0774 | 89.0 | 1780 | 1.6830 | 0.5938 |
| 0.0774 | 90.0 | 1800 | 1.9231 | 0.5563 |
| 0.0774 | 91.0 | 1820 | 1.8051 | 0.5875 |
| 0.0774 | 92.0 | 1840 | 1.8424 | 0.5938 |
| 0.0774 | 93.0 | 1860 | 1.8644 | 0.575 |
| 0.0774 | 94.0 | 1880 | 1.8415 | 0.5687 |
| 0.0774 | 95.0 | 1900 | 1.8917 | 0.55 |
| 0.0774 | 96.0 | 1920 | 1.8964 | 0.5625 |
| 0.0774 | 97.0 | 1940 | 1.6416 | 0.5875 |
| 0.0774 | 98.0 | 1960 | 1.7067 | 0.625 |
| 0.0774 | 99.0 | 1980 | 1.7533 | 0.5938 |
| 0.0569 | 100.0 | 2000 | 1.8181 | 0.5563 |

Framework versions

  • Transformers 4.37.2
  • PyTorch 2.3.0
  • Datasets 2.15.0
  • Tokenizers 0.15.1