Initial Commit
Files added:
- README.md +102 -0
- config.json +46 -0
- eval_result_ner.json +1 -0
- model.safetensors +3 -0
- training_args.bin +3 -0
README.md
ADDED
@@ -0,0 +1,102 @@
---
library_name: transformers
license: mit
base_model: FacebookAI/xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: scenario-kd-pre-ner-full-xlmr_data-univner_full66
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# scenario-kd-pre-ner-full-xlmr_data-univner_full66

This model is a fine-tuned version of [FacebookAI/xlm-roberta-base](https://huggingface.co/FacebookAI/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4655
- Precision: 0.8114
- Recall: 0.8137
- F1: 0.8126
- Accuracy: 0.9807

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 32
- seed: 66
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10

### Training results

| Training Loss | Epoch  | Step  | Validation Loss | Precision | Recall | F1     | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 1.497         | 0.2911 | 500   | 0.9072          | 0.6658    | 0.6611 | 0.6634 | 0.9685   |
| 0.7828        | 0.5822 | 1000  | 0.7395          | 0.7001    | 0.7482 | 0.7233 | 0.9729   |
| 0.6597        | 0.8732 | 1500  | 0.6711          | 0.7843    | 0.7178 | 0.7496 | 0.9753   |
| 0.573         | 1.1643 | 2000  | 0.6213          | 0.7522    | 0.7820 | 0.7668 | 0.9773   |
| 0.5148        | 1.4554 | 2500  | 0.6290          | 0.7325    | 0.7919 | 0.7610 | 0.9759   |
| 0.497         | 1.7465 | 3000  | 0.5801          | 0.7759    | 0.7780 | 0.7769 | 0.9778   |
| 0.4659        | 2.0375 | 3500  | 0.5765          | 0.7926    | 0.7764 | 0.7844 | 0.9786   |
| 0.4098        | 2.3286 | 4000  | 0.5585          | 0.7868    | 0.7853 | 0.7860 | 0.9789   |
| 0.4085        | 2.6197 | 4500  | 0.5536          | 0.7862    | 0.8042 | 0.7951 | 0.9793   |
| 0.3979        | 2.9108 | 5000  | 0.5326          | 0.7902    | 0.8077 | 0.7989 | 0.9798   |
| 0.3568        | 3.2019 | 5500  | 0.5366          | 0.7925    | 0.7922 | 0.7924 | 0.9793   |
| 0.3523        | 3.4929 | 6000  | 0.5277          | 0.8058    | 0.7870 | 0.7963 | 0.9792   |
| 0.3361        | 3.7840 | 6500  | 0.5239          | 0.7851    | 0.8159 | 0.8002 | 0.9792   |
| 0.3298        | 4.0751 | 7000  | 0.5126          | 0.7993    | 0.8074 | 0.8033 | 0.9800   |
| 0.3053        | 4.3662 | 7500  | 0.5124          | 0.8074    | 0.7961 | 0.8017 | 0.9796   |
| 0.3099        | 4.6573 | 8000  | 0.5019          | 0.7953    | 0.8145 | 0.8048 | 0.9799   |
| 0.3031        | 4.9483 | 8500  | 0.4978          | 0.8133    | 0.8009 | 0.8071 | 0.9801   |
| 0.2834        | 5.2394 | 9000  | 0.5067          | 0.8160    | 0.8044 | 0.8101 | 0.9804   |
| 0.2767        | 5.5305 | 9500  | 0.4905          | 0.8104    | 0.8096 | 0.8100 | 0.9804   |
| 0.2799        | 5.8216 | 10000 | 0.4812          | 0.8092    | 0.8058 | 0.8075 | 0.9804   |
| 0.2735        | 6.1126 | 10500 | 0.4849          | 0.8110    | 0.8104 | 0.8107 | 0.9805   |
| 0.261         | 6.4037 | 11000 | 0.4817          | 0.8100    | 0.8114 | 0.8107 | 0.9803   |
| 0.2587        | 6.6948 | 11500 | 0.4814          | 0.8127    | 0.8152 | 0.8139 | 0.9810   |
| 0.2593        | 6.9859 | 12000 | 0.4812          | 0.8171    | 0.8090 | 0.8130 | 0.9806   |
| 0.247         | 7.2770 | 12500 | 0.4816          | 0.8037    | 0.8173 | 0.8104 | 0.9807   |
| 0.2452        | 7.5680 | 13000 | 0.4688          | 0.8130    | 0.8117 | 0.8124 | 0.9805   |
| 0.2426        | 7.8591 | 13500 | 0.4700          | 0.8130    | 0.8104 | 0.8117 | 0.9806   |
| 0.2404        | 8.1502 | 14000 | 0.4680          | 0.8127    | 0.8175 | 0.8151 | 0.9809   |
| 0.2347        | 8.4413 | 14500 | 0.4723          | 0.8156    | 0.8160 | 0.8158 | 0.9810   |
| 0.2356        | 8.7324 | 15000 | 0.4720          | 0.8115    | 0.8186 | 0.8151 | 0.9807   |
| 0.2347        | 9.0234 | 15500 | 0.4634          | 0.8199    | 0.8178 | 0.8188 | 0.9813   |
| 0.2301        | 9.3145 | 16000 | 0.4631          | 0.8172    | 0.8158 | 0.8165 | 0.9809   |
| 0.2287        | 9.6056 | 16500 | 0.4621          | 0.8125    | 0.8147 | 0.8136 | 0.9808   |
| 0.2253        | 9.8967 | 17000 | 0.4655          | 0.8114    | 0.8137 | 0.8126 | 0.9807   |

### Framework versions

- Transformers 4.44.2
- Pytorch 2.1.1+cu121
- Datasets 2.14.5
- Tokenizers 0.19.1
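The hyperparameters above imply an effective batch size of 32 (per-device batch of 8 times 4 gradient-accumulation steps) and a learning rate that decays linearly from 3e-05 toward zero. A minimal sketch of that schedule, assuming no warmup steps since none are listed:

```python
def linear_lr(step: int, total_steps: int, base_lr: float = 3e-05) -> float:
    """Linearly decay the learning rate from base_lr at step 0 to 0 at total_steps."""
    return base_lr * max(0.0, 1.0 - step / total_steps)

# Effective batch size: per-device batch * gradient accumulation steps.
effective_batch = 8 * 4  # = 32, matching total_train_batch_size above

# Halfway through training, the rate has halved.
print(linear_lr(500, 1000))  # → 1.5e-05
```

With ~1718 optimizer steps per epoch (see the Epoch/Step columns of the results table), 10 epochs come to roughly 17,180 steps over which this decay runs.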
config.json
ADDED
@@ -0,0 +1,46 @@
{
  "_name_or_path": "FacebookAI/xlm-roberta-base",
  "architectures": [
    "XLMRobertaForTokenClassificationKD"
  ],
  "attention_probs_dropout_prob": 0.1,
  "bos_token_id": 0,
  "classifier_dropout": null,
  "eos_token_id": 2,
  "hidden_act": "gelu",
  "hidden_dropout_prob": 0.1,
  "hidden_size": 768,
  "id2label": {
    "0": "LABEL_0",
    "1": "LABEL_1",
    "2": "LABEL_2",
    "3": "LABEL_3",
    "4": "LABEL_4",
    "5": "LABEL_5",
    "6": "LABEL_6"
  },
  "initializer_range": 0.02,
  "intermediate_size": 3072,
  "label2id": {
    "LABEL_0": 0,
    "LABEL_1": 1,
    "LABEL_2": 2,
    "LABEL_3": 3,
    "LABEL_4": 4,
    "LABEL_5": 5,
    "LABEL_6": 6
  },
  "layer_norm_eps": 1e-05,
  "max_position_embeddings": 514,
  "model_type": "xlm-roberta",
  "num_attention_heads": 12,
  "num_hidden_layers": 6,
  "output_past": true,
  "pad_token_id": 1,
  "position_embedding_type": "absolute",
  "torch_dtype": "float32",
  "transformers_version": "4.44.2",
  "type_vocab_size": 1,
  "use_cache": true,
  "vocab_size": 250002
}
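Note `num_hidden_layers: 6` versus the 12 layers of the XLM-R base teacher: the "kd" in the model name indicates a distilled student. A back-of-the-envelope parameter count from these config shapes lines up with the float32 `model.safetensors` size below (assuming the custom `XLMRobertaForTokenClassificationKD` head is a single linear layer, which is not confirmed by the source):

```python
# Shapes taken from config.json above.
vocab, hidden, layers, inter, max_pos, labels = 250002, 768, 6, 3072, 514, 7

# Word + position + token-type embeddings, plus the embedding LayerNorm.
embeddings = (vocab + max_pos + 1) * hidden + 2 * hidden
per_layer = (
    4 * (hidden * hidden + hidden)   # Q, K, V and attention output projections
    + (hidden * inter + inter)       # feed-forward up-projection
    + (inter * hidden + hidden)      # feed-forward down-projection
    + 2 * 2 * hidden                 # two LayerNorms (weight + bias each)
)
classifier = hidden * labels + labels  # assumed single linear classification head
total = embeddings + layers * per_layer + classifier

# float32 stores 4 bytes per parameter; model.safetensors is 939,737,140 bytes.
print(total, total * 4)
```

The estimate lands within a fraction of a percent of the checkpoint size, with the 250k-token embedding matrix accounting for over 80% of the parameters.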
eval_result_ner.json
ADDED
@@ -0,0 +1 @@
{
  "ceb_gja": {"precision": 0.4594594594594595, "recall": 0.6938775510204082, "f1": 0.5528455284552845, "accuracy": 0.9583011583011583},
  "en_pud": {"precision": 0.7752918287937743, "recall": 0.7413953488372093, "f1": 0.7579648121730861, "accuracy": 0.9774744994333208},
  "de_pud": {"precision": 0.7380281690140845, "recall": 0.7564966313763234, "f1": 0.747148288973384, "accuracy": 0.9729501664244526},
  "pt_pud": {"precision": 0.8408488063660478, "recall": 0.8653321201091901, "f1": 0.852914798206278, "accuracy": 0.9856880420387064},
  "ru_pud": {"precision": 0.7012369172216937, "recall": 0.7113899613899614, "f1": 0.7062769525634883, "accuracy": 0.9706535778868509},
  "sv_pud": {"precision": 0.8356299212598425, "recall": 0.8250728862973761, "f1": 0.8303178484107578, "accuracy": 0.9836443698888656},
  "tl_trg": {"precision": 0.8333333333333334, "recall": 0.8695652173913043, "f1": 0.851063829787234, "accuracy": 0.9904632152588556},
  "tl_ugnayan": {"precision": 0.4634146341463415, "recall": 0.5757575757575758, "f1": 0.5135135135135135, "accuracy": 0.968094804010939},
  "zh_gsd": {"precision": 0.8099606815203145, "recall": 0.8057366362451108, "f1": 0.8078431372549019, "accuracy": 0.9725274725274725},
  "zh_gsdsimp": {"precision": 0.8225165562913908, "recall": 0.8138925294888598, "f1": 0.8181818181818183, "accuracy": 0.9741924741924742},
  "hr_set": {"precision": 0.8877266387726639, "recall": 0.9073414112615823, "f1": 0.8974268593584772, "accuracy": 0.9866859027205276},
  "da_ddt": {"precision": 0.8605200945626478, "recall": 0.814317673378076, "f1": 0.8367816091954022, "accuracy": 0.9877282250823107},
  "en_ewt": {"precision": 0.7806072477962782, "recall": 0.7325367647058824, "f1": 0.7558084400189664, "accuracy": 0.9747778618958441},
  "pt_bosque": {"precision": 0.8312858312858313, "recall": 0.8353909465020576, "f1": 0.8333333333333334, "accuracy": 0.9842414142877843},
  "sr_set": {"precision": 0.9259259259259259, "recall": 0.9149940968122786, "f1": 0.9204275534441805, "accuracy": 0.9871289729445758},
  "sk_snk": {"precision": 0.7758620689655172, "recall": 0.7377049180327869, "f1": 0.7563025210084033, "accuracy": 0.9665515075376885},
  "sv_talbanken": {"precision": 0.8349056603773585, "recall": 0.9030612244897959, "f1": 0.8676470588235294, "accuracy": 0.9974481032536684}
}
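Per-treebank F1 varies widely, from about 0.51 on tl_ugnayan up to 0.92 on sr_set. One way to summarize the spread is an unweighted macro average over the 17 evaluation sets, sketched here with the F1 values copied from eval_result_ner.json:

```python
# F1 per evaluation treebank, copied from eval_result_ner.json above.
f1 = {
    "ceb_gja": 0.5528455284552845, "en_pud": 0.7579648121730861,
    "de_pud": 0.747148288973384, "pt_pud": 0.852914798206278,
    "ru_pud": 0.7062769525634883, "sv_pud": 0.8303178484107578,
    "tl_trg": 0.851063829787234, "tl_ugnayan": 0.5135135135135135,
    "zh_gsd": 0.8078431372549019, "zh_gsdsimp": 0.8181818181818183,
    "hr_set": 0.8974268593584772, "da_ddt": 0.8367816091954022,
    "en_ewt": 0.7558084400189664, "pt_bosque": 0.8333333333333334,
    "sr_set": 0.9204275534441805, "sk_snk": 0.7563025210084033,
    "sv_talbanken": 0.8676470588235294,
}

macro_f1 = sum(f1.values()) / len(f1)  # unweighted mean across treebanks
best = max(f1, key=f1.get)             # strongest treebank
worst = min(f1, key=f1.get)            # weakest treebank
print(round(macro_f1, 4), best, worst)
```

The macro average (~0.78) sits below the 0.8126 micro F1 reported in the model card, pulled down by the low-resource Cebuano and Tagalog sets.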
model.safetensors
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:8194652d7ecc7cc7565989bf9ce416b9a39b76bc2194d78801252035d674a62e
size 939737140
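Both binary files in this commit are stored as Git LFS pointer files: three `key value` lines giving the spec version, the sha256 object id, and the byte size of the real blob. A small parser for that v1 pointer layout, using the pointer above as input:

```python
def parse_lfs_pointer(text: str) -> dict:
    """Split a git-lfs spec-v1 pointer file into its key/value fields."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")  # each line is "key value"
        fields[key] = value
    fields["size"] = int(fields["size"])     # byte size of the actual object
    return fields

pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:8194652d7ecc7cc7565989bf9ce416b9a39b76bc2194d78801252035d674a62e
size 939737140"""

info = parse_lfs_pointer(pointer)
print(info["size"])  # → 939737140
```

The `size` field is what tells you the ~940 MB checkpoint lives in LFS storage, not in the git history itself.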
training_args.bin
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:2fa39205e561ea98697f70a7a891dcd020ac734159bf1699413527f093f07706
size 5304