danielribeiro committed on
Commit
a7637b3
1 Parent(s): 75724d2

google-play-sentiment-analysis

Files changed (1)
  1. README.md +18 -11
README.md CHANGED

@@ -1,6 +1,6 @@
 ---
 license: mit
-base_model: neuralmind/bert-base-portuguese-cased
+base_model: neuralmind/bert-large-portuguese-cased
 tags:
 - generated_from_trainer
 metrics:
@@ -15,10 +15,10 @@ should probably proofread and complete it, then remove this comment. -->
 
 # google-play-sentiment-analysis
 
-This model is a fine-tuned version of [neuralmind/bert-base-portuguese-cased](https://huggingface.co/neuralmind/bert-base-portuguese-cased) on an unknown dataset.
+This model is a fine-tuned version of [neuralmind/bert-large-portuguese-cased](https://huggingface.co/neuralmind/bert-large-portuguese-cased) on an unknown dataset.
 It achieves the following results on the evaluation set:
-- Loss: 1.3737
-- Accuracy: 0.457
+- Loss: 1.0987
+- Accuracy: 0.3229
 
 ## Model description
 
@@ -43,15 +43,22 @@ The following hyperparameters were used during training:
 - seed: 42
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
-- num_epochs: 3.0
+- num_epochs: 10
 
 ### Training results
 
-| Training Loss | Epoch | Step | Validation Loss | Accuracy |
-|:-------------:|:-----:|:----:|:---------------:|:--------:|
-| No log        | 1.0   | 125  | 1.3050          | 0.437    |
-| No log        | 2.0   | 250  | 1.2939          | 0.448    |
-| No log        | 3.0   | 375  | 1.3737          | 0.457    |
+| Training Loss | Epoch | Step  | Validation Loss | Accuracy |
+|:-------------:|:-----:|:-----:|:---------------:|:--------:|
+| 0.8592        | 1.0   | 1200  | 0.8029          | 0.6279   |
+| 0.9076        | 2.0   | 2400  | 1.1314          | 0.3229   |
+| 1.1221        | 3.0   | 3600  | 1.1170          | 0.3229   |
+| 1.1211        | 4.0   | 4800  | 1.1020          | 0.3362   |
+| 1.115         | 5.0   | 6000  | 1.1015          | 0.3408   |
+| 1.12          | 6.0   | 7200  | 1.0989          | 0.3408   |
+| 1.1153        | 7.0   | 8400  | 1.0989          | 0.3408   |
+| 1.1123        | 8.0   | 9600  | 1.0993          | 0.3408   |
+| 1.1125        | 9.0   | 10800 | 1.0987          | 0.3408   |
+| 1.1109        | 10.0  | 12000 | 1.0987          | 0.3229   |
 
 
 ### Framework versions
@@ -59,4 +66,4 @@ The following hyperparameters were used during training:
 - Transformers 4.35.2
 - Pytorch 2.1.0+cu121
 - Datasets 2.16.1
-- Tokenizers 0.15.0
+- Tokenizers 0.15.1
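
For context on the hyperparameters listed in the diff above, a minimal reproduction sketch with the Transformers `Trainer` might look like the following. Only the seed, the Adam settings, the linear scheduler, and the new epoch count are stated in this commit; the learning rate, batch size, label count, and dataset wiring are placeholders and should be treated as assumptions.

```python
# Hedged sketch of a fine-tuning setup consistent with the card's stated hyperparameters.
# ASSUMPTIONS: learning_rate, per_device_train_batch_size, num_labels, and the dataset
# are placeholders; only seed=42, Adam(betas=(0.9, 0.999), eps=1e-08), a linear
# scheduler, and num_epochs=10 appear in the diff above.
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

base_model = "neuralmind/bert-large-portuguese-cased"  # new base_model in this commit
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForSequenceClassification.from_pretrained(
    base_model,
    num_labels=3,  # label count assumed, not documented in the diff
)

args = TrainingArguments(
    output_dir="google-play-sentiment-analysis",
    num_train_epochs=10,            # num_epochs: 10
    lr_scheduler_type="linear",     # lr_scheduler_type: linear
    adam_beta1=0.9,                 # optimizer: Adam with betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-08,             # ... and epsilon=1e-08
    seed=42,                        # seed: 42
    learning_rate=2e-5,             # placeholder; not shown in this diff
    per_device_train_batch_size=8,  # placeholder; not shown in this diff
    evaluation_strategy="epoch",    # the card reports per-epoch validation results
)

# Dataset objects are omitted here; the card lists the dataset as unknown.
# trainer = Trainer(model=model, args=args, train_dataset=..., eval_dataset=...)
# trainer.train()
```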
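A short inference sketch under the same caveats: the repo id below is inferred from the committer and model name shown on this page, and the classifier's label set is not documented in the diff, so both are assumptions.

```python
# Hedged usage sketch with the Transformers pipeline API.
# ASSUMPTION: the checkpoint is published as danielribeiro/google-play-sentiment-analysis.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="danielribeiro/google-play-sentiment-analysis",  # assumed repo id
)

# Example Portuguese app-review input (illustrative, not taken from the card).
print(classifier("O aplicativo trava toda vez que tento fazer login."))
```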