Arunavaonly committed on
Commit b2b3ca5
1 Parent(s): 7ef26f4

End of training

README.md CHANGED
@@ -17,8 +17,8 @@ should probably proofread and complete it, then remove this comment. -->
 
  This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
  It achieves the following results on the evaluation set:
- - Loss: 0.1624
- - F1: 0.9642
+ - Loss: 2.7755
+ - F1: 0.6113
 
  ## Model description
 
@@ -44,25 +44,26 @@ The following hyperparameters were used during training:
  - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  - lr_scheduler_type: linear
  - training_steps: 1800
+ - mixed_precision_training: Native AMP
 
  ### Training results
 
  | Training Loss | Epoch | Step | Validation Loss | F1 |
  |:-------------:|:-----:|:----:|:---------------:|:------:|
- | No log | 0.34 | 200 | 0.3384 | 0.8842 |
- | No log | 0.68 | 400 | 0.2352 | 0.9358 |
- | 0.3357 | 1.02 | 600 | 0.1838 | 0.9458 |
- | 0.3357 | 1.36 | 800 | 0.1768 | 0.9520 |
- | 0.1794 | 1.69 | 1000 | 0.1650 | 0.9541 |
- | 0.1794 | 2.03 | 1200 | 0.1629 | 0.9609 |
- | 0.1794 | 2.37 | 1400 | 0.2278 | 0.9566 |
- | 0.1305 | 2.71 | 1600 | 0.1633 | 0.9634 |
- | 0.1305 | 3.05 | 1800 | 0.1624 | 0.9642 |
+ | No log | 2.53 | 200 | 0.9869 | 0.4635 |
+ | No log | 5.06 | 400 | 0.8978 | 0.5858 |
+ | 0.8692 | 7.59 | 600 | 1.1978 | 0.6149 |
+ | 0.8692 | 10.13 | 800 | 1.5145 | 0.6112 |
+ | 0.3138 | 12.66 | 1000 | 2.0353 | 0.6041 |
+ | 0.3138 | 15.19 | 1200 | 2.4316 | 0.6203 |
+ | 0.3138 | 17.72 | 1400 | 2.6025 | 0.6002 |
+ | 0.0769 | 20.25 | 1600 | 2.6247 | 0.6082 |
+ | 0.0769 | 22.78 | 1800 | 2.7755 | 0.6113 |
 
 
  ### Framework versions
 
- - Transformers 4.32.1
- - Pytorch 2.0.1+cu118
- - Datasets 2.14.4
- - Tokenizers 0.13.3
+ - Transformers 4.37.2
+ - Pytorch 2.1.0+cu121
+ - Datasets 2.17.1
+ - Tokenizers 0.15.2
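
For orientation, the hyperparameters visible in the updated card (Adam with betas=(0.9,0.999) and epsilon=1e-08, a linear scheduler, 1800 training steps, native AMP) map onto `transformers.TrainingArguments` roughly as sketched below. The learning rate, batch sizes, and output directory are not shown in this hunk, so those values are placeholders, and the eval cadence is inferred from the 200-step spacing of the results table.

```python
# Minimal sketch of the training configuration implied by the card above.
# Values marked "placeholder" or "assumed" are NOT in the diff; only the
# optimizer betas/epsilon, scheduler, step budget, and AMP flag come from it.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="xlm-roberta-base-finetuned",   # placeholder
    max_steps=1800,                            # training_steps: 1800
    lr_scheduler_type="linear",                # lr_scheduler_type: linear
    adam_beta1=0.9,                            # optimizer: Adam with betas=(0.9,0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,                         # epsilon=1e-08
    fp16=True,                                 # mixed_precision_training: Native AMP
    evaluation_strategy="steps",               # assumed from the 200-step eval rows
    eval_steps=200,
    learning_rate=2e-5,                        # placeholder, not visible in this hunk
    per_device_train_batch_size=8,             # placeholder, not visible in this hunk
)
```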
model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:282ceeeec9435e62ef1e7fb83044a73b1c356b2c3b89192c3f2f7f4f14cb33b6
+ oid sha256:e05b68f9c58d44dea59716b1c6de992a009dd527baf5d208be29773fa6b7821d
  size 1112208084
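
The `model.safetensors` entry above is a Git LFS pointer: the commit swaps the sha256 oid while the payload size stays at 1112208084 bytes. A locally downloaded copy of the new weights can be checked against the pointer with a streaming hash, as in this sketch (the local file path is an assumption):

```python
# Sketch: verify a local model.safetensors against the oid in the new
# Git LFS pointer above. The local path is hypothetical.
import hashlib
import os

EXPECTED_OID = "e05b68f9c58d44dea59716b1c6de992a009dd527baf5d208be29773fa6b7821d"
EXPECTED_SIZE = 1112208084  # bytes, from the pointer's "size" line

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Hash the file in 1 MiB chunks so the ~1.1 GB weights never sit fully in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

path = "model.safetensors"  # hypothetical local copy
assert os.path.getsize(path) == EXPECTED_SIZE, "size does not match the LFS pointer"
assert sha256_of(path) == EXPECTED_OID, "sha256 does not match the LFS pointer"
```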
runs/Feb24_13-59-05_e677ac1d2c6a/events.out.tfevents.1708783148.e677ac1d2c6a.5050.0 CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:c1ef40d38e1271ec2e828204a1bfa345816d410fc5023e474532ad2eed328453
- size 6494
+ oid sha256:ee12d76fb1e93a558a7c4bed68807d3b36fa466f816a27d1c2ab225c097ccb5b
+ size 8273
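
The tfevents file is the TensorBoard log that grew from 6494 to 8273 bytes with this run; the results table in the card is derived from the scalars logged there. Pulled locally, those scalars can be read back with TensorBoard's event accumulator, roughly as below. The tag names follow the Trainer's usual `eval/...` convention but are an assumption, as is reading from the run directory rather than the exact file.

```python
# Sketch: recover the logged metrics from the updated tfevents file.
# The directory path comes from the repo layout above; the scalar tag
# names are assumed from the Trainer's usual TensorBoard convention.
from tensorboard.backend.event_processing.event_accumulator import EventAccumulator

acc = EventAccumulator("runs/Feb24_13-59-05_e677ac1d2c6a")
acc.Reload()

print(acc.Tags()["scalars"])          # lists available tags, e.g. eval/loss, eval/f1 (assumed)

for event in acc.Scalars("eval/f1"):  # assumed tag name
    print(event.step, event.value)    # should track the F1 column of the results table
```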