haryoaw committed
Commit 4241d49
1 parent: 1b847e7

Initial Commit
Files changed (5)
  1. README.md +60 -51
  2. config.json +3 -3
  3. eval_result_ner.json +1 -1
  4. model.safetensors +2 -2
  5. training_args.bin +1 -1
README.md CHANGED
@@ -1,14 +1,14 @@
  ---
- base_model: haryoaw/scenario-TCR-NER_data-univner_half
  library_name: transformers
  license: mit
  metrics:
  - precision
  - recall
  - f1
  - accuracy
- tags:
- - generated_from_trainer
  model-index:
  - name: scenario-non-kd-pre-ner-full-mdeberta_data-univner_half66
    results: []
@@ -19,13 +19,13 @@ should probably proofread and complete it, then remove this comment. -->

  # scenario-non-kd-pre-ner-full-mdeberta_data-univner_half66

- This model is a fine-tuned version of [haryoaw/scenario-TCR-NER_data-univner_half](https://huggingface.co/haryoaw/scenario-TCR-NER_data-univner_half) on the None dataset.
  It achieves the following results on the evaluation set:
- - Loss: 0.1403
- - Precision: 0.8613
- - Recall: 0.8638
- - F1: 0.8626
- - Accuracy: 0.9848

  ## Model description

@@ -56,48 +56,57 @@ The following hyperparameters were used during training:

  | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
  |:-------------:|:-------:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
- | 0.0064 | 0.5828 | 500 | 0.0946 | 0.8529 | 0.8691 | 0.8609 | 0.9847 |
- | 0.0076 | 1.1655 | 1000 | 0.0939 | 0.8503 | 0.8562 | 0.8532 | 0.9842 |
- | 0.0059 | 1.7483 | 1500 | 0.0980 | 0.8583 | 0.8515 | 0.8549 | 0.9840 |
- | 0.005 | 2.3310 | 2000 | 0.1052 | 0.8469 | 0.8618 | 0.8543 | 0.9840 |
- | 0.0054 | 2.9138 | 2500 | 0.1025 | 0.8389 | 0.8699 | 0.8541 | 0.9841 |
- | 0.0045 | 3.4965 | 3000 | 0.1032 | 0.8371 | 0.8696 | 0.8530 | 0.9836 |
- | 0.0042 | 4.0793 | 3500 | 0.1088 | 0.8459 | 0.8642 | 0.8550 | 0.9840 |
- | 0.0032 | 4.6620 | 4000 | 0.1182 | 0.8301 | 0.8691 | 0.8492 | 0.9828 |
- | 0.0035 | 5.2448 | 4500 | 0.1164 | 0.8486 | 0.8611 | 0.8548 | 0.9841 |
- | 0.0031 | 5.8275 | 5000 | 0.1190 | 0.8352 | 0.8602 | 0.8475 | 0.9836 |
- | 0.0029 | 6.4103 | 5500 | 0.1197 | 0.8516 | 0.8694 | 0.8604 | 0.9843 |
- | 0.0029 | 6.9930 | 6000 | 0.1177 | 0.8282 | 0.8674 | 0.8474 | 0.9833 |
- | 0.0024 | 7.5758 | 6500 | 0.1219 | 0.8396 | 0.8680 | 0.8536 | 0.9845 |
- | 0.0031 | 8.1585 | 7000 | 0.1160 | 0.8566 | 0.8559 | 0.8562 | 0.9846 |
- | 0.002 | 8.7413 | 7500 | 0.1222 | 0.8385 | 0.8624 | 0.8503 | 0.9834 |
- | 0.0021 | 9.3240 | 8000 | 0.1217 | 0.8522 | 0.8667 | 0.8594 | 0.9847 |
- | 0.0019 | 9.9068 | 8500 | 0.1333 | 0.8222 | 0.8699 | 0.8453 | 0.9835 |
- | 0.002 | 10.4895 | 9000 | 0.1210 | 0.8475 | 0.8665 | 0.8569 | 0.9845 |
- | 0.0017 | 11.0723 | 9500 | 0.1192 | 0.8571 | 0.8642 | 0.8606 | 0.9849 |
- | 0.0013 | 11.6550 | 10000 | 0.1329 | 0.8524 | 0.8716 | 0.8619 | 0.9848 |
- | 0.0016 | 12.2378 | 10500 | 0.1337 | 0.8493 | 0.8700 | 0.8595 | 0.9844 |
- | 0.0014 | 12.8205 | 11000 | 0.1245 | 0.8635 | 0.8707 | 0.8671 | 0.9854 |
- | 0.0014 | 13.4033 | 11500 | 0.1299 | 0.8611 | 0.8595 | 0.8603 | 0.9849 |
- | 0.0012 | 13.9860 | 12000 | 0.1229 | 0.8545 | 0.8657 | 0.8600 | 0.9848 |
- | 0.0011 | 14.5688 | 12500 | 0.1258 | 0.8585 | 0.8631 | 0.8608 | 0.9849 |
- | 0.0008 | 15.1515 | 13000 | 0.1377 | 0.8558 | 0.8658 | 0.8608 | 0.9847 |
- | 0.001 | 15.7343 | 13500 | 0.1328 | 0.8576 | 0.8611 | 0.8593 | 0.9846 |
- | 0.0008 | 16.3170 | 14000 | 0.1331 | 0.8596 | 0.8660 | 0.8628 | 0.9850 |
- | 0.0008 | 16.8998 | 14500 | 0.1292 | 0.8549 | 0.8694 | 0.8621 | 0.9849 |
- | 0.0008 | 17.4825 | 15000 | 0.1388 | 0.8496 | 0.8699 | 0.8596 | 0.9846 |
- | 0.0008 | 18.0653 | 15500 | 0.1364 | 0.8577 | 0.8629 | 0.8603 | 0.9848 |
- | 0.0005 | 18.6480 | 16000 | 0.1419 | 0.8627 | 0.8645 | 0.8636 | 0.9848 |
- | 0.0007 | 19.2308 | 16500 | 0.1414 | 0.8569 | 0.8709 | 0.8638 | 0.9850 |
- | 0.0005 | 19.8135 | 17000 | 0.1369 | 0.8513 | 0.8700 | 0.8606 | 0.9848 |
- | 0.0004 | 20.3963 | 17500 | 0.1419 | 0.8580 | 0.8658 | 0.8619 | 0.9849 |
- | 0.0004 | 20.9790 | 18000 | 0.1452 | 0.8598 | 0.8700 | 0.8649 | 0.9849 |
- | 0.0005 | 21.5618 | 18500 | 0.1417 | 0.8540 | 0.8673 | 0.8606 | 0.9842 |
- | 0.0003 | 22.1445 | 19000 | 0.1419 | 0.8667 | 0.8611 | 0.8639 | 0.9848 |
- | 0.0003 | 22.7273 | 19500 | 0.1500 | 0.8588 | 0.8632 | 0.8610 | 0.9845 |
- | 0.0004 | 23.3100 | 20000 | 0.1470 | 0.8557 | 0.8717 | 0.8636 | 0.9846 |
- | 0.0004 | 23.8928 | 20500 | 0.1387 | 0.8652 | 0.8671 | 0.8662 | 0.9852 |
- | 0.0002 | 24.4755 | 21000 | 0.1403 | 0.8613 | 0.8638 | 0.8626 | 0.9848 |

  ### Framework versions
 
  ---
  library_name: transformers
  license: mit
+ base_model: microsoft/mdeberta-v3-base
+ tags:
+ - generated_from_trainer
  metrics:
  - precision
  - recall
  - f1
  - accuracy
  model-index:
  - name: scenario-non-kd-pre-ner-full-mdeberta_data-univner_half66
    results: []

  # scenario-non-kd-pre-ner-full-mdeberta_data-univner_half66

+ This model is a fine-tuned version of [microsoft/mdeberta-v3-base](https://huggingface.co/microsoft/mdeberta-v3-base) on the None dataset.
  It achieves the following results on the evaluation set:
+ - Loss: 0.1996
+ - Precision: 0.7561
+ - Recall: 0.7723
+ - F1: 0.7641
+ - Accuracy: 0.9755

  ## Model description

 
  | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
  |:-------------:|:-------:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
+ | 0.304 | 0.5828 | 500 | 0.1829 | 0.3627 | 0.3414 | 0.3517 | 0.9389 |
+ | 0.155 | 1.1655 | 1000 | 0.1258 | 0.5315 | 0.6722 | 0.5937 | 0.9577 |
+ | 0.0973 | 1.7483 | 1500 | 0.1032 | 0.6320 | 0.7086 | 0.6681 | 0.9670 |
+ | 0.0707 | 2.3310 | 2000 | 0.1055 | 0.6478 | 0.7443 | 0.6927 | 0.9683 |
+ | 0.0584 | 2.9138 | 2500 | 0.1007 | 0.6755 | 0.7436 | 0.7079 | 0.9698 |
+ | 0.0431 | 3.4965 | 3000 | 0.1186 | 0.6613 | 0.7552 | 0.7051 | 0.9686 |
+ | 0.0392 | 4.0793 | 3500 | 0.1087 | 0.6946 | 0.7627 | 0.7270 | 0.9718 |
+ | 0.0273 | 4.6620 | 4000 | 0.1095 | 0.7451 | 0.7286 | 0.7367 | 0.9736 |
+ | 0.0233 | 5.2448 | 4500 | 0.1259 | 0.6876 | 0.7772 | 0.7297 | 0.9719 |
+ | 0.0189 | 5.8275 | 5000 | 0.1201 | 0.7222 | 0.7526 | 0.7371 | 0.9730 |
+ | 0.0161 | 6.4103 | 5500 | 0.1339 | 0.7334 | 0.7342 | 0.7338 | 0.9734 |
+ | 0.0153 | 6.9930 | 6000 | 0.1338 | 0.7226 | 0.7533 | 0.7376 | 0.9737 |
+ | 0.0109 | 7.5758 | 6500 | 0.1320 | 0.7345 | 0.7634 | 0.7486 | 0.9742 |
+ | 0.0108 | 8.1585 | 7000 | 0.1427 | 0.7189 | 0.7617 | 0.7397 | 0.9728 |
+ | 0.0089 | 8.7413 | 7500 | 0.1423 | 0.7268 | 0.7647 | 0.7453 | 0.9738 |
+ | 0.0075 | 9.3240 | 8000 | 0.1471 | 0.7332 | 0.7638 | 0.7482 | 0.9740 |
+ | 0.0071 | 9.9068 | 8500 | 0.1501 | 0.7502 | 0.7466 | 0.7484 | 0.9744 |
+ | 0.0064 | 10.4895 | 9000 | 0.1558 | 0.7133 | 0.7813 | 0.7458 | 0.9734 |
+ | 0.0058 | 11.0723 | 9500 | 0.1495 | 0.7514 | 0.7599 | 0.7556 | 0.9750 |
+ | 0.0047 | 11.6550 | 10000 | 0.1632 | 0.7146 | 0.7627 | 0.7379 | 0.9727 |
+ | 0.0043 | 12.2378 | 10500 | 0.1707 | 0.7259 | 0.7699 | 0.7472 | 0.9740 |
+ | 0.0039 | 12.8205 | 11000 | 0.1648 | 0.7415 | 0.7608 | 0.7510 | 0.9742 |
+ | 0.0041 | 13.4033 | 11500 | 0.1717 | 0.7247 | 0.7693 | 0.7463 | 0.9739 |
+ | 0.0036 | 13.9860 | 12000 | 0.1711 | 0.7251 | 0.7687 | 0.7463 | 0.9737 |
+ | 0.0029 | 14.5688 | 12500 | 0.1757 | 0.7275 | 0.7751 | 0.7505 | 0.9741 |
+ | 0.0026 | 15.1515 | 13000 | 0.1817 | 0.7527 | 0.7563 | 0.7545 | 0.9744 |
+ | 0.0027 | 15.7343 | 13500 | 0.1779 | 0.7534 | 0.7554 | 0.7544 | 0.9750 |
+ | 0.0028 | 16.3170 | 14000 | 0.1826 | 0.7505 | 0.7578 | 0.7541 | 0.9747 |
+ | 0.0026 | 16.8998 | 14500 | 0.1793 | 0.7573 | 0.7681 | 0.7627 | 0.9755 |
+ | 0.0022 | 17.4825 | 15000 | 0.1803 | 0.7460 | 0.7722 | 0.7589 | 0.9753 |
+ | 0.002 | 18.0653 | 15500 | 0.1858 | 0.7331 | 0.7807 | 0.7561 | 0.9746 |
+ | 0.0018 | 18.6480 | 16000 | 0.1875 | 0.7451 | 0.7624 | 0.7536 | 0.9740 |
+ | 0.0018 | 19.2308 | 16500 | 0.1896 | 0.7484 | 0.7661 | 0.7572 | 0.9749 |
+ | 0.0014 | 19.8135 | 17000 | 0.1862 | 0.7592 | 0.7689 | 0.7640 | 0.9755 |
+ | 0.0018 | 20.3963 | 17500 | 0.1936 | 0.7559 | 0.7567 | 0.7563 | 0.9749 |
+ | 0.0014 | 20.9790 | 18000 | 0.1908 | 0.7514 | 0.7680 | 0.7596 | 0.9751 |
+ | 0.0012 | 21.5618 | 18500 | 0.1956 | 0.7464 | 0.7692 | 0.7576 | 0.9751 |
+ | 0.0015 | 22.1445 | 19000 | 0.1986 | 0.7352 | 0.7751 | 0.7546 | 0.9746 |
+ | 0.0012 | 22.7273 | 19500 | 0.1936 | 0.7277 | 0.7804 | 0.7531 | 0.9746 |
+ | 0.001 | 23.3100 | 20000 | 0.1975 | 0.7358 | 0.7781 | 0.7564 | 0.9749 |
+ | 0.0011 | 23.8928 | 20500 | 0.1956 | 0.7485 | 0.7749 | 0.7615 | 0.9754 |
+ | 0.001 | 24.4755 | 21000 | 0.1950 | 0.7522 | 0.7728 | 0.7624 | 0.9754 |
+ | 0.0009 | 25.0583 | 21500 | 0.1958 | 0.7522 | 0.7713 | 0.7616 | 0.9755 |
+ | 0.0006 | 25.6410 | 22000 | 0.1998 | 0.7454 | 0.7741 | 0.7595 | 0.9751 |
+ | 0.0006 | 26.2238 | 22500 | 0.2026 | 0.7496 | 0.7725 | 0.7609 | 0.9753 |
+ | 0.0008 | 26.8065 | 23000 | 0.1991 | 0.7609 | 0.7638 | 0.7623 | 0.9755 |
+ | 0.0006 | 27.3893 | 23500 | 0.1962 | 0.7547 | 0.7772 | 0.7658 | 0.9758 |
+ | 0.0006 | 27.9720 | 24000 | 0.1995 | 0.7551 | 0.7728 | 0.7638 | 0.9755 |
+ | 0.0005 | 28.5548 | 24500 | 0.2003 | 0.7538 | 0.7738 | 0.7636 | 0.9754 |
+ | 0.0006 | 29.1375 | 25000 | 0.1996 | 0.7574 | 0.7694 | 0.7634 | 0.9755 |
+ | 0.0005 | 29.7203 | 25500 | 0.1996 | 0.7561 | 0.7723 | 0.7641 | 0.9755 |

  ### Framework versions
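The training log above is a plain markdown table, so the best checkpoint can be picked mechanically. A minimal sketch (column names are taken from the table header; only three rows are inlined for illustration):

```python
# Sketch: find the best checkpoint by validation F1 in a markdown training log.

def parse_table(markdown: str) -> list[dict]:
    """Parse a pipe-delimited markdown table into a list of row dicts."""
    lines = [ln.strip() for ln in markdown.strip().splitlines()]
    header = [c.strip() for c in lines[0].strip("|").split("|")]
    rows = []
    for line in lines[2:]:  # lines[1] is the |:---:| separator row
        cells = [c.strip() for c in line.strip("|").split("|")]
        rows.append(dict(zip(header, cells)))
    return rows

def best_by_f1(rows: list[dict]) -> dict:
    """Return the row with the highest validation F1."""
    return max(rows, key=lambda r: float(r["F1"]))

log = """
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:--:|:--------:|
| 0.304 | 0.5828 | 500 | 0.1829 | 0.3627 | 0.3414 | 0.3517 | 0.9389 |
| 0.0006 | 27.3893 | 23500 | 0.1962 | 0.7547 | 0.7772 | 0.7658 | 0.9758 |
| 0.0005 | 29.7203 | 25500 | 0.1996 | 0.7561 | 0.7723 | 0.7641 | 0.9755 |
"""

best = best_by_f1(parse_table(log))
print(best["Step"], best["F1"])  # prints: 23500 0.7658
```

Applied to the full log, the best validation F1 is 0.7658 at step 23500, slightly above the final-step 0.7641 reported in the summary.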
config.json CHANGED
@@ -1,7 +1,7 @@
  {
- "_name_or_path": "haryoaw/scenario-TCR-NER_data-univner_half",
  "architectures": [
- "DebertaV2ForTokenClassification"
  ],
  "attention_probs_dropout_prob": 0.1,
  "hidden_act": "gelu",
@@ -33,7 +33,7 @@
  "model_type": "deberta-v2",
  "norm_rel_ebd": "layer_norm",
  "num_attention_heads": 12,
- "num_hidden_layers": 12,
  "pad_token_id": 0,
  "pooler_dropout": 0,
  "pooler_hidden_act": "gelu",
 
  {
+ "_name_or_path": "microsoft/mdeberta-v3-base",
  "architectures": [
+ "DebertaForTokenClassification"
  ],
  "attention_probs_dropout_prob": 0.1,
  "hidden_act": "gelu",

  "model_type": "deberta-v2",
  "norm_rel_ebd": "layer_norm",
  "num_attention_heads": 12,
+ "num_hidden_layers": 6,
  "pad_token_id": 0,
  "pooler_dropout": 0,
  "pooler_hidden_act": "gelu",
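The substantive change in config.json is `num_hidden_layers` dropping from 12 to 6. A back-of-the-envelope per-layer parameter count shows why this roughly matches the checkpoint shrinking below; the sketch assumes a standard transformer encoder layer with hidden size 768 and FFN size 3072 (mdeberta-v3-base's defaults, not shown in the hunks above) and ignores DeBERTa-v2's extra relative-position projections:

```python
def encoder_layer_params(hidden: int, intermediate: int) -> int:
    """Approximate parameter count of one transformer encoder layer:
    Q/K/V/output projections, 2-layer FFN, and 2 LayerNorms.
    (DeBERTa-v2's disentangled-attention extras are not counted.)"""
    attn = 4 * hidden * hidden + 4 * hidden      # four linear maps with bias
    ffn = hidden * intermediate + intermediate   # up-projection
    ffn += intermediate * hidden + hidden        # down-projection
    norms = 2 * 2 * hidden                       # two LayerNorms (gamma + beta)
    return attn + ffn + norms

per_layer = encoder_layer_params(768, 3072)
saved_params = 6 * per_layer          # 6 of 12 layers removed
saved_bytes = saved_params * 4        # fp32, 4 bytes per parameter
print(per_layer, saved_params, saved_bytes)
```

Six such layers come to about 42.5M parameters, roughly 170 MB in fp32, which is in line with model.safetensors shrinking by about 168.5 MB (1112921036 → 944366708 bytes) in this commit.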
eval_result_ner.json CHANGED
@@ -1 +1 @@
- {"ceb_gja": {"precision": 0.9583333333333334, "recall": 0.9387755102040817, "f1": 0.9484536082474228, "accuracy": 0.9961389961389961}, "en_pud": {"precision": 0.8153277931671283, "recall": 0.8213953488372093, "f1": 0.8183503243744208, "accuracy": 0.9823857196826596}, "de_pud": {"precision": 0.8085106382978723, "recall": 0.8411934552454283, "f1": 0.8245283018867925, "accuracy": 0.9812010688669073}, "pt_pud": {"precision": 0.8820840950639853, "recall": 0.8780709736123748, "f1": 0.8800729594163246, "accuracy": 0.988294100055539}, "ru_pud": {"precision": 0.7153088630259624, "recall": 0.7712355212355212, "f1": 0.7422201579191826, "accuracy": 0.9742185481787652}, "sv_pud": {"precision": 0.8851351351351351, "recall": 0.891156462585034, "f1": 0.888135593220339, "accuracy": 0.9882574963304676}, "tl_trg": {"precision": 0.9166666666666666, "recall": 0.9565217391304348, "f1": 0.9361702127659574, "accuracy": 0.9959128065395095}, "tl_ugnayan": {"precision": 0.6923076923076923, "recall": 0.8181818181818182, "f1": 0.7500000000000001, "accuracy": 0.9799453053783045}, "zh_gsd": {"precision": 0.8670967741935484, "recall": 0.8761408083441982, "f1": 0.8715953307392995, "accuracy": 0.9818514818514819}, "zh_gsdsimp": {"precision": 0.8878627968337731, "recall": 0.8820445609436435, "f1": 0.8849441157133466, "accuracy": 0.9826839826839827}, "hr_set": {"precision": 0.9203910614525139, "recall": 0.9394155381325731, "f1": 0.9298059964726632, "accuracy": 0.990684253915911}, "da_ddt": {"precision": 0.8744186046511628, "recall": 0.8411633109619687, "f1": 0.8574686431014824, "accuracy": 0.9888257008879577}, "en_ewt": {"precision": 0.8473967684021544, "recall": 0.8676470588235294, "f1": 0.857402361489555, "accuracy": 0.9841415308602622}, "pt_bosque": {"precision": 0.9082167832167832, "recall": 0.8551440329218107, "f1": 0.8808817295464179, "accuracy": 0.9869584118243733}, "sr_set": {"precision": 0.9491725768321513, "recall": 0.948051948051948, "f1": 0.9486119314825754, "accuracy": 0.9908064092461255}, "sk_snk": {"precision": 0.8186695278969958, "recall": 0.833879781420765, "f1": 0.826204656199242, "accuracy": 0.9762091708542714}, "sv_talbanken": {"precision": 0.8227272727272728, "recall": 0.923469387755102, "f1": 0.8701923076923076, "accuracy": 0.997399028316239}}
+ {"ceb_gja": {"precision": 0.4864864864864865, "recall": 0.7346938775510204, "f1": 0.5853658536585366, "accuracy": 0.9583011583011583}, "en_pud": {"precision": 0.7389597644749755, "recall": 0.7004651162790698, "f1": 0.7191977077363897, "accuracy": 0.97369663770306}, "de_pud": {"precision": 0.6733067729083665, "recall": 0.6506256015399422, "f1": 0.661771904062653, "accuracy": 0.964699263981998}, "pt_pud": {"precision": 0.6958614051973051, "recall": 0.6578707916287534, "f1": 0.676333021515435, "accuracy": 0.9695817490494296}, "ru_pud": {"precision": 0.6028571428571429, "recall": 0.611003861003861, "f1": 0.6069031639501438, "accuracy": 0.9617669852751227}, "sv_pud": {"precision": 0.7585492227979275, "recall": 0.7113702623906706, "f1": 0.7342026078234705, "accuracy": 0.9749947578108619}, "tl_trg": {"precision": 0.425, "recall": 0.7391304347826086, "f1": 0.5396825396825397, "accuracy": 0.9550408719346049}, "tl_ugnayan": {"precision": 0.45098039215686275, "recall": 0.696969696969697, "f1": 0.5476190476190477, "accuracy": 0.9608021877848678}, "zh_gsd": {"precision": 0.7667560321715817, "recall": 0.7457627118644068, "f1": 0.7561136814276272, "accuracy": 0.9676157176157176}, "zh_gsdsimp": {"precision": 0.776, "recall": 0.762778505897772, "f1": 0.7693324520819564, "accuracy": 0.9708624708624709}, "hr_set": {"precision": 0.8520055325034578, "recall": 0.8781183178902352, "f1": 0.8648648648648648, "accuracy": 0.9834707337180544}, "da_ddt": {"precision": 0.7322834645669292, "recall": 0.6241610738255033, "f1": 0.6739130434782609, "accuracy": 0.976154843859124}, "en_ewt": {"precision": 0.7760467380720545, "recall": 0.7325367647058824, "f1": 0.7536643026004729, "accuracy": 0.9758138422919074}, "pt_bosque": {"precision": 0.6792975970425139, "recall": 0.6049382716049383, "f1": 0.6399651719634306, "accuracy": 0.9674322561947544}, "sr_set": {"precision": 0.8651162790697674, "recall": 0.8783943329397875, "f1": 0.8717047451669595, "accuracy": 0.9826635145784082}, "sk_snk": {"precision": 0.654292343387471, "recall": 0.6163934426229508, "f1": 0.634777715250422, "accuracy": 0.9520257537688442}, "sv_talbanken": {"precision": 0.8109452736318408, "recall": 0.8316326530612245, "f1": 0.8211586901763225, "accuracy": 0.9966629042547971}}
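eval_result_ner.json stores per-treebank precision/recall/F1/accuracy as one flat JSON object, so summarizing it is a small exercise. A sketch of macro-averaging one metric across datasets (only two entries are inlined here, copied from the updated file above):

```python
import json

def macro_average(results: dict, metric: str) -> float:
    """Unweighted mean of one metric across all evaluation datasets."""
    return sum(r[metric] for r in results.values()) / len(results)

# Two entries copied verbatim from the updated eval_result_ner.json.
raw = '''{"hr_set": {"precision": 0.8520055325034578, "recall": 0.8781183178902352,
          "f1": 0.8648648648648648, "accuracy": 0.9834707337180544},
          "ru_pud": {"precision": 0.6028571428571429, "recall": 0.611003861003861,
          "f1": 0.6069031639501438, "accuracy": 0.9617669852751227}}'''

results = json.loads(raw)
print(round(macro_average(results, "f1"), 4))  # prints: 0.7359
```

The same helper applied to the full file would show how unevenly the 6-layer model degrades across treebanks (e.g. ceb_gja falls from 0.95 to 0.59 F1 while sv_talbanken only drops from 0.87 to 0.82).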
model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:90701b50f4b917fe1021065442cac41f3b7840e3fb09024c76915bf1e4239040
- size 1112921036

  version https://git-lfs.github.com/spec/v1
+ oid sha256:360b3cdf67db34474d0f73560495b58f1c3ab576204323d8b26923f5fa93e01f
+ size 944366708
training_args.bin CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:1dfb050a791440edd96515d1177ebd40dd7fc24c9169ac1cef5188e040640cf8
  size 5304

  version https://git-lfs.github.com/spec/v1
+ oid sha256:ecc396eabbe7cbc981aa7e4804a3c44f5acb4ba9bee40799bad7496c45bb8ac1
  size 5304
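The model.safetensors and training_args.bin diffs above are not the binaries themselves but Git LFS pointer files: three `key value` lines (version, oid, size) per the spec URL in the first line. A minimal parser sketch:

```python
def parse_lfs_pointer(text: str) -> dict:
    """Parse a git-lfs pointer file into its key/value fields."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

# The new model.safetensors pointer from the commit above.
pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:360b3cdf67db34474d0f73560495b58f1c3ab576204323d8b26923f5fa93e01f
size 944366708"""

info = parse_lfs_pointer(pointer)
print(info["size"])  # prints: 944366708  (~0.88 GiB, down from ~1.04 GiB)
```

The `oid` is the SHA-256 of the real blob, which is why both LFS files change oid whenever their contents change even when, as with training_args.bin, the size stays identical.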