---
license: apache-2.0
thumbnail: >-
  https://huggingface.co/mrm8488/distilroberta-finetuned-financial-news-sentiment-analysis/resolve/main/logo_no_bg.png
tags:
- generated_from_trainer
- financial
- stocks
- sentiment
widget:
- text: Operating profit totaled EUR 9.4 mn , down from EUR 11.7 mn in 2004 .
datasets:
- financial_phrasebank
- carblacac/twitter-sentiment-analysis
metrics:
- accuracy
model-index:
- name: distilRoberta-financial-sentiment
  results:
  - task:
      name: Text Classification
      type: text-classification
    dataset:
      name: financial_phrasebank
      type: financial_phrasebank
      args: sentences_allagree
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.9823008849557522
language:
- nl
- en
---

<div style="text-align:center;width:250px;height:250px;">
    <img src="https://huggingface.co/mrm8488/distilroberta-finetuned-financial-news-sentiment-analysis/resolve/main/logo_no_bg.png" alt="logo">
</div>


# DistilRoberta-financial-sentiment


This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the financial_phrasebank dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1116
- Accuracy: 0.9823
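
For quick inference, a minimal sketch using the `transformers` pipeline API is shown below. The model id is taken from the thumbnail URL of this repository; adjust it if you host the checkpoint under a different name.

```python
from transformers import pipeline

# Model id inferred from the thumbnail URL above; change it if the checkpoint lives elsewhere.
model_id = "mrm8488/distilroberta-finetuned-financial-news-sentiment-analysis"

# The text-classification pipeline loads both the fine-tuned model and its tokenizer.
classifier = pipeline("text-classification", model=model_id)

print(classifier("Operating profit totaled EUR 9.4 mn , down from EUR 11.7 mn in 2004 ."))
# Expected: a list with one dict holding the predicted label and its score,
# e.g. [{'label': 'negative', 'score': ...}]
```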

## Base Model description

This model is a distilled version of the [RoBERTa-base model](https://huggingface.co/roberta-base). It follows the same training procedure as [DistilBERT](https://huggingface.co/distilbert-base-uncased).
The code for the distillation process can be found [here](https://github.com/huggingface/transformers/tree/master/examples/distillation).
This model is case-sensitive: it makes a difference between english and English.

The model has 6 layers, a hidden size of 768, and 12 attention heads, for a total of 82M parameters (compared to 125M for RoBERTa-base).
On average, DistilRoBERTa is twice as fast as RoBERTa-base.
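
These figures can be checked with a short, illustrative snippet (assuming the `transformers` library is installed):

```python
from transformers import AutoConfig, AutoModel

# Inspect the distilroberta-base architecture described above.
config = AutoConfig.from_pretrained("distilroberta-base")
print(config.num_hidden_layers, config.hidden_size, config.num_attention_heads)
# -> 6 768 12

model = AutoModel.from_pretrained("distilroberta-base")
print(f"{sum(p.numel() for p in model.parameters()) / 1e6:.0f}M parameters")
# -> roughly 82M
```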

## Training Data

The Financial PhraseBank is a polar sentiment dataset of sentences from financial news. It consists of 4,840 sentences from English-language financial news, categorised by sentiment, and is split into configurations by the agreement rate of the 5-8 annotators.
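
A hedged sketch of loading this dataset with the `datasets` library, using the `sentences_allagree` configuration referenced in the metadata above:

```python
from datasets import load_dataset

# "sentences_allagree" keeps only sentences on which all annotators agreed,
# matching the evaluation configuration in the model-index metadata.
dataset = load_dataset("financial_phrasebank", "sentences_allagree")

print(dataset["train"][0])
# -> {'sentence': '...', 'label': 0 | 1 | 2}  (0 = negative, 1 = neutral, 2 = positive)
```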

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a sketch of the corresponding `TrainingArguments` follows the list):
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
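
The sketch below reconstructs a plausible `Trainer` setup from these hyperparameters. The original training script is not part of this card, so the dataset preprocessing and the per-epoch evaluation setting are assumptions.

```python
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

# Hypothetical reconstruction of the training setup from the hyperparameters listed above.
model_name = "distilroberta-base"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=3)

args = TrainingArguments(
    output_dir="distilroberta-financial-sentiment",
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",   # the default scheduler, listed here for clarity
    num_train_epochs=5,
    evaluation_strategy="epoch",  # assumption: per-epoch evaluation matches the results table
)

# `train_dataset` / `eval_dataset` would be tokenized splits of financial_phrasebank
# (see "Training Data" above); they are omitted here for brevity.
# trainer = Trainer(model=model, args=args, train_dataset=..., eval_dataset=...)
# trainer.train()
```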

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log        | 1.0   | 255  | 0.1670          | 0.9646   |
| 0.209         | 2.0   | 510  | 0.2290          | 0.9558   |
| 0.209         | 3.0   | 765  | 0.2044          | 0.9558   |
| 0.0326        | 4.0   | 1020 | 0.1116          | 0.9823   |
| 0.0326        | 5.0   | 1275 | 0.1127          | 0.9779   |


### Framework versions

- Transformers 4.10.2
- Pytorch 1.9.0+cu102
- Datasets 1.12.1
- Tokenizers 0.10.3