motheecreator committed on
Commit 44a6973
1 Parent(s): d712510

Update README.md

Files changed (1)
  1. README.md +30 -4
README.md CHANGED
@@ -28,16 +28,42 @@ model-index:
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
 should probably proofread and complete it, then remove this comment. -->
 
-# vit-base-patch16-224-in21k-finetuned
+# Vision Transformer (ViT) for Facial Expression Recognition Model Card
 
-This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the FER 2013 and MMI datasets.
-It achieves the following results on the evaluation set:
+## Model Overview
+
+- **Model Name:** [motheecreator/vit-Facial-Expression-Recognition](https://huggingface.co/motheecreator/vit-Facial-Expression-Recognition)
+
+- **Task:** Facial Expression/Emotion Recognition
+
+- **Datasets:** [FER2013](https://www.kaggle.com/datasets/msambare/fer2013), [MMI Facial Expression Database](https://mmifacedb.eu)
+
+- **Model Architecture:** [Vision Transformer (ViT)](https://huggingface.co/docs/transformers/model_doc/vit)
+
+- **Finetuned from model:** [vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k)
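+
+A minimal usage sketch for the checkpoint above (assumes the `transformers` library is installed; the image path is a placeholder):
+
+```python
+from transformers import pipeline
+
+# Load the fine-tuned checkpoint from the Hub
+classifier = pipeline("image-classification",
+                      model="motheecreator/vit-Facial-Expression-Recognition")
+
+# Returns the top emotion labels with confidence scores
+print(classifier("path/to/face.jpg"))
+```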
+
 - Loss: 0.4353
 - Accuracy: 0.8571
 
 ## Model description
 
-More information needed
+The vit-face-expression model is a Vision Transformer fine-tuned for the task of facial emotion recognition.
+
+It is trained on the FER2013 and MMI Facial Expression datasets, which consist of facial images categorized into seven different emotions (an inference sketch over these classes follows the list):
+- Angry
+- Disgust
+- Fear
+- Happy
+- Sad
+- Surprise
+- Neutral
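+
+The exact class-to-index order comes from the checkpoint's `id2label` mapping rather than the list above. A manual inference sketch over these seven classes (the image path is a placeholder):
+
+```python
+import torch
+from PIL import Image
+from transformers import AutoImageProcessor, AutoModelForImageClassification
+
+model_id = "motheecreator/vit-Facial-Expression-Recognition"
+processor = AutoImageProcessor.from_pretrained(model_id)
+model = AutoModelForImageClassification.from_pretrained(model_id)
+
+# FER images are often grayscale; convert to RGB for the ViT processor
+image = Image.open("path/to/face.jpg").convert("RGB")
+inputs = processor(images=image, return_tensors="pt")
+
+with torch.no_grad():
+    logits = model(**inputs).logits
+
+# Softmax yields one probability per emotion class
+probs = logits.softmax(dim=-1)[0]
+pred_id = int(probs.argmax())
+print(model.config.id2label[pred_id], float(probs[pred_id]))
+```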
+
+## Data Preprocessing
+
+The input images are preprocessed before being fed into the model; a transform sketch follows the list below. The preprocessing steps include:
+- **Resizing:** Images are resized to the model's 224x224 input resolution.
+- **Normalization:** Pixel values are normalized with the image processor's mean and standard deviation.
+- **Data Augmentation:** Random transformations such as rotations, flips, and zooms are applied to augment the training dataset.
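+
+A torchvision-based sketch of train-time transforms matching these steps (the augmentation parameters are illustrative assumptions, not the recorded training values):
+
+```python
+from torchvision import transforms
+
+train_transforms = transforms.Compose([
+    transforms.RandomResizedCrop(224, scale=(0.8, 1.0)),  # resize + random zoom
+    transforms.RandomHorizontalFlip(),                    # random mirror flip
+    transforms.RandomRotation(10),                        # small random rotation
+    transforms.ToTensor(),                                # HWC [0, 255] -> CHW [0.0, 1.0]
+    transforms.Normalize(mean=[0.5, 0.5, 0.5],            # ViT image-processor defaults
+                         std=[0.5, 0.5, 0.5]),
+])
+```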
 
 ## Intended uses & limitations
69