Update README.md
README.md CHANGED
@@ -1,3 +1,14 @@
@@ -380,4 +391,4 @@ I would like to thank [@Ricardo]( https://ricardodeazambuja.com/deep_learning/20
+---
+license: gpl-3.0
+language:
+- en
+metrics:
+- accuracy
+pipeline_tag: audio-classification
+tags:
+- music
+- code
+---
# Emotion Recognition From Speech (V1.0)

<p align="justify">Understanding emotions from voice comes naturally to humans, but automating emotion recognition from speech without relying on any language or linguistic information remains an uphill grind. In the research work presented here, I try to predict one of six emotions (sad, neutral, happy, fear, angry, disgust) from the input speech. The diagram below explains how emotion recognition from speech works: audio features are extracted from the input speech and passed to the emotion recognition model, which predicts one of the six emotions.</p>
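The feature-extraction-then-classification pipeline above can be sketched in a few lines. This is a minimal illustrative stand-in, not the notebook's actual code: `extract_features`, `predict_emotion`, and the random weights are hypothetical placeholders for the real feature set and trained model.

```python
import numpy as np

EMOTIONS = ["sad", "neutral", "happy", "fear", "angry", "disgust"]

def extract_features(signal, n_fft=2048):
    """Toy feature extractor: summarize the magnitude spectrum
    of the input speech into a fixed-length 13-dim vector."""
    spectrum = np.abs(np.fft.rfft(signal, n=n_fft))
    # Pool the spectrum into 13 coarse frequency bands.
    bands = np.array_split(spectrum, 13)
    return np.array([band.mean() for band in bands])

def predict_emotion(features, weights):
    """Linear scorer standing in for the trained recognition model:
    the highest-scoring class is the predicted emotion."""
    scores = weights @ features
    return EMOTIONS[int(np.argmax(scores))]

rng = np.random.default_rng(0)
signal = rng.standard_normal(22050)     # one second of fake audio at 22.05 kHz
weights = rng.standard_normal((6, 13))  # hypothetical trained weights
features = extract_features(signal)
print(predict_emotion(features, weights))
```

The real notebook would replace `extract_features` with proper audio features (e.g. MFCCs) and `predict_emotion` with the trained model, but the flow — features in, one of six labels out — is the same.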
<p align="justify">The emotion recognition model is finished, ready, and can be used in real time.
The 1130532_ResearchMethodology_Project_Final.ipynb file can be downloaded and used after making the necessary path changes described in the installation and usage sections.
I look forward to developing the other models mentioned in the road-map (future ideas) and integrating them with my current emotion recognition model.</p>