Conrad747 committed
Commit 29f3267
1 Parent(s): 8c3f6a2

update model card README.md

Files changed (1):
  1. README.md +23 -23
README.md CHANGED
```diff
@@ -19,21 +19,21 @@ model-index:
       name: lg-ner
       type: lg-ner
       config: lug
-      split: train
+      split: test
       args: lug
     metrics:
     - name: Precision
       type: precision
-      value: 0.29015544041450775
+      value: 0.9370212765957446
     - name: Recall
       type: recall
-      value: 0.27722772277227725
+      value: 0.9359591952394446
     - name: F1
       type: f1
-      value: 0.2835443037974684
+      value: 0.9364899347887723
     - name: Accuracy
       type: accuracy
-      value: 0.7297843665768194
+      value: 0.9824210946863764
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -43,11 +43,11 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the lg-ner dataset.
 It achieves the following results on the evaluation set:
-- Loss: 1.0530
-- Precision: 0.2902
-- Recall: 0.2772
-- F1: 0.2835
-- Accuracy: 0.7298
+- Loss: 0.0908
+- Precision: 0.9370
+- Recall: 0.9360
+- F1: 0.9365
+- Accuracy: 0.9824
 
 ## Model description
 
@@ -78,21 +78,21 @@ The following hyperparameters were used during training:
 
 | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1     | Accuracy |
 |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
-| No log        | 1.0   | 25   | 1.2878          | 0.0       | 0.0    | 0.0    | 0.7271   |
-| No log        | 2.0   | 50   | 1.2373          | 0.0       | 0.0    | 0.0    | 0.7271   |
-| No log        | 3.0   | 75   | 1.2309          | 0.3542    | 0.1683 | 0.2282 | 0.7244   |
-| No log        | 4.0   | 100  | 1.1505          | 0.2712    | 0.2376 | 0.2533 | 0.7183   |
-| No log        | 5.0   | 125  | 1.1360          | 0.2579    | 0.2426 | 0.25   | 0.7170   |
-| No log        | 6.0   | 150  | 1.0932          | 0.3108    | 0.2277 | 0.2629 | 0.7338   |
-| No log        | 7.0   | 175  | 1.0761          | 0.2989    | 0.2574 | 0.2766 | 0.7298   |
-| No log        | 8.0   | 200  | 1.0645          | 0.2805    | 0.3069 | 0.2931 | 0.7244   |
-| No log        | 9.0   | 225  | 1.0577          | 0.3022    | 0.2723 | 0.2865 | 0.7325   |
-| No log        | 10.0  | 250  | 1.0530          | 0.2902    | 0.2772 | 0.2835 | 0.7298   |
+| 0.5792        | 1.0   | 609  | 0.2463          | 0.7259    | 0.7662 | 0.7455 | 0.9406   |
+| 0.2271        | 2.0   | 1218 | 0.1587          | 0.8198    | 0.8782 | 0.8480 | 0.9607   |
+| 0.1652        | 3.0   | 1827 | 0.1289          | 0.8612    | 0.8918 | 0.8762 | 0.9677   |
+| 0.1266        | 4.0   | 2436 | 0.1083          | 0.8990    | 0.9059 | 0.9025 | 0.9744   |
+| 0.081         | 5.0   | 3045 | 0.1043          | 0.9183    | 0.9147 | 0.9165 | 0.9767   |
+| 0.0676        | 6.0   | 3654 | 0.0893          | 0.9261    | 0.9334 | 0.9297 | 0.9811   |
+| 0.0565        | 7.0   | 4263 | 0.0877          | 0.9389    | 0.9368 | 0.9379 | 0.9813   |
+| 0.0519        | 8.0   | 4872 | 0.0919          | 0.9404    | 0.9340 | 0.9372 | 0.9819   |
+| 0.047         | 9.0   | 5481 | 0.0896          | 0.9376    | 0.9360 | 0.9368 | 0.9825   |
+| 0.0379        | 10.0  | 6090 | 0.0908          | 0.9370    | 0.9360 | 0.9365 | 0.9824   |
 
 
 ### Framework versions
 
-- Transformers 4.24.0
-- Pytorch 1.12.1+cu113
-- Datasets 2.7.1
+- Transformers 4.26.1
+- Pytorch 1.13.1+cu116
+- Datasets 2.10.1
 - Tokenizers 0.13.2
```
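The updated precision, recall, and F1 in the commit are mutually consistent: for the micro-averaged entity-level scores the Trainer reports, F1 is the harmonic mean of precision and recall. A quick check, using the full-precision values from the updated `model-index` block:

```python
# Sanity check: the card's F1 should equal the harmonic mean of the
# reported precision and recall (values copied from the updated metadata).
precision = 0.9370212765957446
recall = 0.9359591952394446

f1 = 2 * precision * recall / (precision + recall)
print(round(f1, 4))  # agrees with the card's rounded F1 of 0.9365
```

The same identity holds for the old values being removed (0.2902, 0.2772, 0.2835), so both versions of the card are internally consistent; the commit simply swaps in a much stronger evaluation run.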
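The framework versions recorded in the new card can be used to reconstruct a matching environment. A minimal sketch: note that the `+cu116` suffix on the PyTorch version is a CUDA build tag selected by the package index used at install time, not part of the version pin, so it is omitted here.

```shell
# Pin the libraries to the versions listed in the updated model card.
# The +cu116 CUDA build of torch is chosen via the install index, not the pin.
pip install "transformers==4.26.1" "torch==1.13.1" "datasets==2.10.1" "tokenizers==0.13.2"
```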