---
license: mit
metrics:
- recall
pipeline_tag: video-classification
---

# Static and dynamic facial emotion recognition using the Emo-AffectNet model

[![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/in-search-of-a-robust-facial-expressions/facial-expression-recognition-on-affectnet)](https://paperswithcode.com/paper/in-search-of-a-robust-facial-expressions)

<p align="center">
    <img width="32%" src="https://github.com/ElenaRyumina/EMO-AffectNetModel/blob/main/gif/test_04_AffWild2.gif?raw=true" alt="test_4_AffWild2"/>
    <img  width="32%" src="https://github.com/ElenaRyumina/EMO-AffectNetModel/blob/main/gif/test_02_AffWild2.gif?raw=true" alt="test_2_AffWild2"/>
    <img  width="32%" src="https://github.com/ElenaRyumina/EMO-AffectNetModel/blob/main/gif/test_03_AffWild2.gif?raw=true" alt="test_3_AffWild2"/>
</p>

This is the Emo-AffectNet model for facial emotion recognition from videos and images.
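
Below is a minimal sketch of single-image inference, assuming the static backbone is available as a Keras ``.h5`` checkpoint. The filename, 224x224 input size, normalization, and emotion class order are assumptions for illustration only; see the GitHub repository for the exact preprocessing pipeline.

```python
# Sketch: single-image emotion prediction with an assumed Keras backbone checkpoint.
# Filename, input size, scaling, and class order are assumptions, not the official API.
import cv2
import numpy as np
import tensorflow as tf

EMOTIONS = ["Neutral", "Happiness", "Sadness", "Surprise", "Fear", "Disgust", "Anger"]  # assumed order

model = tf.keras.models.load_model("EmoAffectnet_backbone.h5")  # hypothetical filename

face = cv2.imread("face_crop.jpg")              # a pre-cropped face image
face = cv2.cvtColor(face, cv2.COLOR_BGR2RGB)    # OpenCV reads BGR, convert to RGB
face = cv2.resize(face, (224, 224)) / 255.0     # assumed input size and scaling
probs = model.predict(face[np.newaxis, ...])[0]  # shape: (num_classes,)
print(EMOTIONS[int(np.argmax(probs))])
```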

To see emotions detected via webcam, run ``run_webcam``. Webcam result:

<p align="center">
    <img width="50%" src="https://github.com/ElenaRyumina/EMO-AffectNetModel/blob/main/gif/result_2.gif?raw=true" alt="result"/>
</p>
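
The sketch below illustrates what such a webcam inference loop can look like. It is not the repository's ``run_webcam`` script: it reuses the assumed checkpoint and class order from the example above and omits the face detection and cropping step that the full pipeline performs.

```python
# Illustrative webcam loop (a sketch, not the repository's run_webcam script).
# The real pipeline detects and crops faces first; that step is omitted here for brevity.
import cv2
import numpy as np
import tensorflow as tf

EMOTIONS = ["Neutral", "Happiness", "Sadness", "Surprise", "Fear", "Disgust", "Anger"]  # assumed order
model = tf.keras.models.load_model("EmoAffectnet_backbone.h5")  # hypothetical filename

cap = cv2.VideoCapture(0)
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    inp = cv2.resize(rgb, (224, 224)) / 255.0             # assumed input size and scaling
    probs = model.predict(inp[np.newaxis, ...], verbose=0)[0]
    label = EMOTIONS[int(np.argmax(probs))]
    cv2.putText(frame, label, (10, 30), cv2.FONT_HERSHEY_SIMPLEX, 1.0, (0, 255, 0), 2)
    cv2.imshow("Emo-AffectNet webcam demo", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):                  # press 'q' to quit
        break
cap.release()
cv2.destroyAllWindows()
```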

For more information see [GitHub](https://github.com/ElenaRyumina/EMO-AffectNetModel).

### Citation

If you use the Emo-AffectNet model in your research, please consider citing the research [paper](https://www.sciencedirect.com/science/article/pii/S0925231222012656). Here is an example BibTeX entry:

```bibtex
@article{RYUMINA2022,
  title   = {In Search of a Robust Facial Expressions Recognition Model: A Large-Scale Visual Cross-Corpus Study},
  author  = {Elena Ryumina and Denis Dresvyanskiy and Alexey Karpov},
  journal = {Neurocomputing},
  year    = {2022},
  doi     = {10.1016/j.neucom.2022.10.013},
  url     = {https://www.sciencedirect.com/science/article/pii/S0925231222012656},
}
```