uvegesistvan committed
Commit 8da07f4
1 Parent(s): e18ffdf

model card created

Files changed (1): README.md (+89 −3)
---
language:
- de
tags:
- emotion-classification
- roberta
- fine-tuned
- german
license: mit
datasets:
- custom
model-index:
- name: Fine-tuned RoBERTa for Emotion Classification in German
  results:
  - task:
      type: text-classification
      name: Emotion Classification in German
    dataset:
      name: German Custom Dataset
      type: text
    metrics:
    - name: Precision (Macro Avg)
      type: precision
      value: 0.85
    - name: Recall (Macro Avg)
      type: recall
      value: 0.85
    - name: F1 Score (Macro Avg)
      type: f1
      value: 0.85
    - name: Accuracy
      type: accuracy
      value: 0.81
---

# Fine-tuned RoBERTa Model for Emotion Classification in German

## Model Description
This model, named **Emotion_RoBERTa_german6_v7**, is a fine-tuned version of the [RoBERTa](https://huggingface.co/roberta-base) model, specifically tailored for emotion classification in German. It was trained to classify text into six emotional categories (**anger, fear, disgust, sadness, joy,** and **none of them**).

## Intended Use
This model is intended for classifying German-language text into emotional categories. It can be used in applications such as sentiment analysis, social media monitoring, customer feedback analysis, and similar tasks. The model predicts the dominant emotion in a given text among the six predefined categories.

## Metrics

| **Class**        | **Precision (P)** | **Recall (R)** | **F1-Score (F1)** |
|------------------|-------------------|----------------|-------------------|
| **anger**        | 0.69              | 0.79           | 0.74              |
| **fear**         | 0.96              | 0.99           | 0.98              |
| **disgust**      | 0.94              | 0.95           | 0.95              |
| **sadness**      | 0.88              | 0.84           | 0.86              |
| **joy**          | 0.89              | 0.87           | 0.88              |
| **none of them** | 0.74              | 0.64           | 0.69              |
| **Accuracy**     |                   |                | **0.81**          |
| **Macro Avg**    | 0.85              | 0.85           | 0.85              |
| **Weighted Avg** | 0.85              | 0.81           | 0.81              |

### Overall Performance
- **Accuracy:** 0.81
- **Macro Average Precision:** 0.85
- **Macro Average Recall:** 0.85
- **Macro Average F1-Score:** 0.85

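The macro averages are the unweighted means of the six per-class scores, which is why they can exceed the accuracy when the weaker classes are small. A quick arithmetic check using the F1 values from the table:

```python
# Per-class F1 scores copied from the metrics table above
f1_per_class = {
    "anger": 0.74,
    "fear": 0.98,
    "disgust": 0.95,
    "sadness": 0.86,
    "joy": 0.88,
    "none of them": 0.69,
}

# Macro average = unweighted mean over all six classes
macro_f1 = sum(f1_per_class.values()) / len(f1_per_class)
print(round(macro_f1, 2))  # 0.85
```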
### Class-wise Performance
The model performs strongly on the **fear**, **disgust**, and **joy** categories, with particularly high precision, recall, and F1 scores. It is only moderately accurate on the **anger** and **none of them** categories, indicating potential areas for improvement.

## Limitations
- **Context Sensitivity:** The model may struggle with emotions that require deeper contextual understanding.
- **Class Imbalance:** The weaker performance on the "none of them" category suggests that further training on a more balanced dataset could improve accuracy.
- **Generalization:** Performance may vary with the text's domain, language style, and length.

## Training Data
The model was fine-tuned on a custom German dataset containing textual samples labeled across the six emotional categories. The dataset's class distribution was taken into account during training to ensure balanced performance across classes.

## How to Use
You can use this model directly with the `transformers` library from Hugging Face. Below is an example of how to load and use the model:

```python
from transformers import pipeline

# Load the fine-tuned model
classifier = pipeline("text-classification", model="visegradmedia-emotion/Emotion_RoBERTa_german6_v7")

# Example usage
result = classifier("Heute fühle ich mich sehr glücklich!")
print(result)
```
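
By default the pipeline returns only the top label. Passing `top_k=None` to `pipeline(...)` makes it return a score for every one of the six labels, which you can then post-process yourself. A minimal sketch of that post-processing (the scores below are invented for illustration, not real model output):

```python
# Illustrative result of classifier(text) with top_k=None: a list of
# {label, score} dicts for one input text (scores here are made up)
scores = [
    {"label": "joy", "score": 0.91},
    {"label": "sadness", "score": 0.03},
    {"label": "anger", "score": 0.02},
    {"label": "fear", "score": 0.02},
    {"label": "disgust", "score": 0.01},
    {"label": "none of them", "score": 0.01},
]

# The dominant emotion is simply the label with the highest score
dominant = max(scores, key=lambda e: e["score"])
print(dominant["label"])  # joy
```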