uvegesistvan committed
Commit c24a2fc
1 Parent(s): 83be881

Update README.md

model card created

Files changed (1):
  1. README.md +94 -3
README.md CHANGED
@@ -1,3 +1,94 @@
- ---
- license: apache-2.0
- ---
---
language:
- pl
tags:
- emotion-classification
- roberta
- fine-tuned
- polish
license: mit
datasets:
- custom
model-index:
- name: Fine-tuned RoBERTa for Emotion Classification in Polish
  results:
  - task:
      type: text-classification
      name: Emotion Classification in Polish
    dataset:
      name: Polish Custom Dataset
      type: text
    metrics:
    - name: Precision (Macro Avg)
      type: precision
      value: 0.86
    - name: Recall (Macro Avg)
      type: recall
      value: 0.86
    - name: F1 Score (Macro Avg)
      type: f1
      value: 0.85
    - name: Accuracy
      type: accuracy
      value: 0.82
---

# Fine-tuned RoBERTa Model for Emotion Classification in Polish

## Model Description
This model is a fine-tuned version of [RoBERTa](https://huggingface.co/roberta-base), tailored for emotion classification in Polish.
It was trained to classify text into six emotional categories (**anger, fear, disgust, sadness, joy,** and **none of them**).

## Intended Use
This model is intended for classifying Polish-language text into emotional categories.
It can be used in applications such as sentiment analysis, social media monitoring, customer feedback analysis, and similar tasks.
The model predicts the dominant emotion in a given text among the six predefined categories.

## Metrics

| **Class**        | **Precision (P)** | **Recall (R)** | **F1-Score (F1)** |
|------------------|-------------------|----------------|-------------------|
| **anger**        | 0.70              | 0.81           | 0.75              |
| **fear**         | 0.96              | 0.96           | 0.98              |
| **disgust**      | 0.97              | 0.97           | 0.95              |
| **sadness**      | 0.87              | 0.87           | 0.86              |
| **joy**          | 0.91              | 0.91           | 0.89              |
| **none of them** | 0.75              | 0.75           | 0.70              |
| **Accuracy**     |                   |                | **0.82**          |
| **Macro Avg**    | 0.86              | 0.86           | 0.85              |
| **Weighted Avg** | 0.83              | 0.83           | 0.82              |

### Overall Performance
- **Accuracy:** 0.82
- **Macro Average Precision:** 0.86
- **Macro Average Recall:** 0.86
- **Macro Average F1-Score:** 0.85

### Class-wise Performance
The model performs strongly on the **fear**, **disgust**, and **joy** categories, with particularly high precision, recall, and F1 scores.
It performs only moderately well on the **anger** and **none of them** categories, indicating potential areas for improvement.
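
The macro averages above are the unweighted means of the per-class scores. As a minimal pure-Python sketch (the function names are illustrative, not part of this repository), per-class precision, recall, and F1 and the macro-F1 are derived from paired gold/predicted labels like this:

```python
def per_class_prf(y_true, y_pred, labels):
    """Per-class precision, recall, and F1 from paired gold/predicted labels."""
    stats = {}
    for label in labels:
        tp = sum(t == label and p == label for t, p in zip(y_true, y_pred))
        fp = sum(t != label and p == label for t, p in zip(y_true, y_pred))
        fn = sum(t == label and p != label for t, p in zip(y_true, y_pred))
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
        stats[label] = (precision, recall, f1)
    return stats


def macro_f1(y_true, y_pred, labels):
    """Macro average: the unweighted mean of the per-class F1 scores."""
    stats = per_class_prf(y_true, y_pred, labels)
    return sum(f1 for _, _, f1 in stats.values()) / len(labels)
```

The weighted average differs only in that each class's score is multiplied by its share of the evaluation set instead of `1 / len(labels)`.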

## Limitations
- **Context Sensitivity:** The model may struggle with emotions that require deeper contextual understanding.
- **Class Imbalance:** Performance on the "none of them" category suggests that further training on a more balanced dataset could improve accuracy.
- **Generalization:** Performance may vary with the text's domain, style, and length; the model was trained only on Polish and should not be expected to transfer to other languages.

## Training Data
The model was fine-tuned on a custom Polish dataset containing textual samples labeled across the six emotional categories.
The dataset's distribution was considered during training to ensure balanced performance across classes.
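
The dataset itself is not published, so the following is only a hypothetical sketch of one common way to account for the label distribution: computing inverse-frequency class weights, which can then be fed into a weighted loss during fine-tuning.

```python
from collections import Counter


def class_weights(labels):
    """Inverse-frequency weights: rarer classes receive larger weights,
    so a weighted loss pays proportionally more attention to them."""
    counts = Counter(labels)
    n_classes = len(counts)
    total = len(labels)
    return {label: total / (n_classes * counts[label]) for label in counts}


# Hypothetical, tiny label column for illustration only:
train_labels = ["joy", "joy", "joy", "anger", "sadness", "sadness"]
print(class_weights(train_labels))
```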

## How to Use
You can use this model directly with the `transformers` library from Hugging Face. Below is an example of how to load and use the model:

```python
from transformers import pipeline

# Load the fine-tuned model ("your-model-name" is a placeholder for the model id)
classifier = pipeline("text-classification", model="your-model-name")

# Example usage ("I feel very happy today!")
result = classifier("Czuję się dziś bardzo szczęśliwy!")
print(result)
```
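
The pipeline returns a list of dicts with `label` and `score` keys (pass `top_k=None` to get scores for all classes rather than just the top one). Picking the dominant emotion from such output can be sketched as follows; the scores below are made up for illustration:

```python
def dominant_emotion(scores):
    """Pick the label with the highest score from pipeline-style output."""
    return max(scores, key=lambda item: item["score"])["label"]


# Hypothetical output of classifier(text, top_k=None):
scores = [
    {"label": "joy", "score": 0.91},
    {"label": "sadness", "score": 0.03},
    {"label": "anger", "score": 0.02},
    {"label": "fear", "score": 0.02},
    {"label": "disgust", "score": 0.01},
    {"label": "none of them", "score": 0.01},
]
print(dominant_emotion(scores))  # prints "joy"
```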