ElenaRyumina committed
Commit e6bd59e
1 Parent(s): 9e58e71
Files changed (1):
  1. README.md +34 -1
README.md CHANGED
@@ -3,4 +3,37 @@ license: mit
  metrics:
  - recall
  pipeline_tag: video-classification
- ---
+ ---
+
+ # Static and dynamic facial emotion recognition using the Emo-AffectNet model
+
+ [![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/in-search-of-a-robust-facial-expressions/facial-expression-recognition-on-affectnet)](https://paperswithcode.com/paper/in-search-of-a-robust-facial-expressions)
+
+ <p align="center">
+ <img width="32%" src="https://github.com/ElenaRyumina/EMO-AffectNetModel/blob/main/gif/test_04_AffWild2.gif?raw=true" alt="test_4_AffWild2"/>
+ <img width="32%" src="https://github.com/ElenaRyumina/EMO-AffectNetModel/blob/main/gif/test_02_AffWild2.gif?raw=true" alt="test_2_AffWild2"/>
+ <img width="32%" src="https://github.com/ElenaRyumina/EMO-AffectNetModel/blob/main/gif/test_03_AffWild2.gif?raw=true" alt="test_3_AffWild2"/>
+ </p>
+
+ This is the Emo-AffectNet model for facial emotion recognition from videos and images.
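+
+ As a minimal sketch (not the official API), single-image inference could look like the snippet below. The repository id, checkpoint file name, TorchScript format, and label order are all assumptions for illustration; check this repo's file list and the GitHub code for the actual names.
+
+ ```python
+ # Minimal inference sketch (not the official API). The repo id, checkpoint
+ # file name, TorchScript format, and label order are assumptions; verify
+ # them against the actual files in this repository.
+ import numpy as np
+ import torch
+ from huggingface_hub import hf_hub_download
+ from PIL import Image
+
+ # Seven emotion classes; this order is assumed, not confirmed.
+ EMOTIONS = ["Neutral", "Happiness", "Sadness", "Surprise", "Fear", "Disgust", "Anger"]
+
+ # Hypothetical repo id and file name.
+ path = hf_hub_download("ElenaRyumina/EMO-AffectNetModel", "FER_static_ResNet50_AffectNet.pt")
+ model = torch.jit.load(path).eval()  # assumes a TorchScript checkpoint
+
+ # A pre-cropped face, resized to the 224x224 input a ResNet-50 backbone expects.
+ face = Image.open("face.jpg").convert("RGB").resize((224, 224))
+ x = torch.from_numpy(np.asarray(face)).float().permute(2, 0, 1).unsqueeze(0) / 255.0
+
+ with torch.no_grad():
+     probs = torch.softmax(model(x), dim=1)[0]
+ print(EMOTIONS[int(probs.argmax())])
+ ```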
+
+ To see the emotions detected via your webcam, run ``run_webcam``. Webcam result:
+
+ <p align="center">
+ <img width="50%" src="https://github.com/ElenaRyumina/EMO-AffectNetModel/blob/main/gif/result_2.gif?raw=true" alt="result"/>
+ </p>
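+
+ As a rough illustration of what such a webcam demo does (capture frames, detect the face, classify it, overlay the label), here is a stand-in loop; the Haar-cascade detector and the ``classify_face`` helper are placeholders, not the repo's actual components:
+
+ ```python
+ # Rough webcam-demo sketch: a stand-in, not the repo's actual script.
+ import cv2
+
+ def classify_face(face_bgr):
+     """Hypothetical helper: plug in Emo-AffectNet inference here (see the sketch above)."""
+     return "Neutral"
+
+ # Haar cascade used only as a simple stand-in face detector.
+ detector = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
+ cap = cv2.VideoCapture(0)  # default webcam
+
+ while cap.isOpened():
+     ok, frame = cap.read()
+     if not ok:
+         break
+     gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
+     for (x, y, w, h) in detector.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5):
+         label = classify_face(frame[y:y + h, x:x + w])
+         cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
+         cv2.putText(frame, label, (x, y - 10), cv2.FONT_HERSHEY_SIMPLEX, 0.9, (0, 255, 0), 2)
+     cv2.imshow("Emo-AffectNet webcam demo", frame)
+     if cv2.waitKey(1) & 0xFF == ord("q"):  # press q to quit
+         break
+
+ cap.release()
+ cv2.destroyAllWindows()
+ ```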
+
+ For more information, see the project's [GitHub repository](https://github.com/ElenaRyumina/EMO-AffectNetModel).
+
+ ### Citation
+
+ If you use the EMO-AffectNet model in your research, please consider citing the research [paper](https://www.sciencedirect.com/science/article/pii/S0925231222012656). Here is an example BibTeX entry:
+
+ ```bibtex
+ @article{RYUMINA2022,
+   title   = {In Search of a Robust Facial Expressions Recognition Model: A Large-Scale Visual Cross-Corpus Study},
+   author  = {Elena Ryumina and Denis Dresvyanskiy and Alexey Karpov},
+   journal = {Neurocomputing},
+   year    = {2022},
+   doi     = {10.1016/j.neucom.2022.10.013},
+   url     = {https://www.sciencedirect.com/science/article/pii/S0925231222012656},
+ }
+ ```