---
base_model: haryoaw/scenario-TCR-NER_data-univner_full
library_name: transformers
license: mit
metrics:
- precision
- recall
- f1
- accuracy
tags:
- generated_from_trainer
model-index:
- name: scenario-kd-scr-ner-full-mdeberta_data-univner_full55
  results: []
---
# scenario-kd-scr-ner-full-mdeberta_data-univner_full55

This model is a fine-tuned version of [haryoaw/scenario-TCR-NER_data-univner_full](https://huggingface.co/haryoaw/scenario-TCR-NER_data-univner_full); the fine-tuning dataset is not specified in this card. It achieves the following results on the evaluation set:
- Loss: 182.5497
- Precision: 0.6804
- Recall: 0.6154
- F1: 0.6463
- Accuracy: 0.9657
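
A minimal inference sketch with the `transformers` pipeline is shown below. The repository id is assumed from the model name in this card and may differ, and the entity labels returned depend on the checkpoint's configuration.

```python
# Minimal sketch: load the checkpoint as a token-classification (NER) pipeline.
# The repo id below is an assumption based on the model name in this card.
from transformers import pipeline

model_id = "haryoaw/scenario-kd-scr-ner-full-mdeberta_data-univner_full55"  # assumed repo id

ner = pipeline(
    "token-classification",
    model=model_id,
    aggregation_strategy="simple",  # merge sub-word pieces into entity spans
)

print(ner("Barack Obama visited Jakarta in 2010."))
```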

## Model description
More information needed

## Intended uses & limitations
More information needed

## Training and evaluation data
More information needed

## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (a reproduction sketch follows the list):
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 32
- seed: 55
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
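
The sketch below maps the hyperparameters above onto `TrainingArguments`; dataset loading, tokenization, and the distillation objective are omitted, and the output directory name is only a placeholder.

```python
# Sketch of TrainingArguments matching the listed hyperparameters
# (argument names follow the Transformers 4.44 API used in this card).
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="scenario-kd-scr-ner-full-mdeberta_data-univner_full55",  # placeholder
    learning_rate=3e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=32,
    gradient_accumulation_steps=4,   # effective train batch size: 8 * 4 = 32
    num_train_epochs=10,
    lr_scheduler_type="linear",
    seed=55,
    # Adam defaults in Transformers already use betas=(0.9, 0.999) and eps=1e-8
)
```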

### Training results
| Training Loss | Epoch  | Step  | Validation Loss | Precision | Recall | F1     | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 627.6279      | 0.2911 | 500   | 560.3030        | 0.0       | 0.0    | 0.0    | 0.9241   |
| 529.0723      | 0.5822 | 1000  | 503.1468        | 0.3145    | 0.0378 | 0.0675 | 0.9253   |
| 481.2237      | 0.8732 | 1500  | 462.0725        | 0.3110    | 0.0811 | 0.1286 | 0.9284   |
| 443.6023      | 1.1643 | 2000  | 431.7476        | 0.4241    | 0.0822 | 0.1378 | 0.9304   |
| 413.7332      | 1.4554 | 2500  | 404.1758        | 0.4897    | 0.3199 | 0.3870 | 0.9448   |
| 389.6206      | 1.7465 | 3000  | 381.5862        | 0.5329    | 0.3800 | 0.4437 | 0.9494   |
| 368.3142      | 2.0375 | 3500  | 363.5359        | 0.5888    | 0.3769 | 0.4596 | 0.9507   |
| 349.4665      | 2.3286 | 4000  | 346.6397        | 0.5410    | 0.4793 | 0.5083 | 0.9539   |
| 333.3893      | 2.6197 | 4500  | 331.5422        | 0.6223    | 0.4291 | 0.5079 | 0.9550   |
| 318.8641      | 2.9108 | 5000  | 316.8669        | 0.5984    | 0.5612 | 0.5792 | 0.9597   |
| 303.8825      | 3.2019 | 5500  | 303.3675        | 0.6190    | 0.5569 | 0.5863 | 0.9608   |
| 290.8802      | 3.4929 | 6000  | 291.3924        | 0.6347    | 0.5390 | 0.5830 | 0.9606   |
| 279.9562      | 3.7840 | 6500  | 281.3740        | 0.6484    | 0.5403 | 0.5894 | 0.9613   |
| 268.853       | 4.0751 | 7000  | 270.4638        | 0.6513    | 0.5578 | 0.6009 | 0.9615   |
| 257.9733      | 4.3662 | 7500  | 260.5476        | 0.6536    | 0.5817 | 0.6156 | 0.9635   |
| 248.9305      | 4.6573 | 8000  | 251.8452        | 0.6631    | 0.5926 | 0.6258 | 0.9638   |
| 240.7242      | 4.9483 | 8500  | 243.8925        | 0.6587    | 0.5882 | 0.6215 | 0.9633   |
| 232.3709      | 5.2394 | 9000  | 236.3189        | 0.6514    | 0.6077 | 0.6288 | 0.9640   |
| 224.6698      | 5.5305 | 9500  | 229.3991        | 0.6675    | 0.5722 | 0.6162 | 0.9629   |
| 218.3664      | 5.8216 | 10000 | 223.3077        | 0.6788    | 0.5823 | 0.6269 | 0.9639   |
| 212.9249      | 6.1126 | 10500 | 217.2704        | 0.6717    | 0.6003 | 0.6340 | 0.9643   |
| 206.6058      | 6.4037 | 11000 | 211.8754        | 0.6570    | 0.6226 | 0.6393 | 0.9649   |
| 201.722       | 6.6948 | 11500 | 207.1151        | 0.6680    | 0.6210 | 0.6436 | 0.9650   |
| 197.034       | 6.9859 | 12000 | 202.9470        | 0.6805    | 0.6047 | 0.6403 | 0.9649   |
| 192.5555      | 7.2770 | 12500 | 199.1373        | 0.6749    | 0.6130 | 0.6425 | 0.9651   |
| 189.1607      | 7.5680 | 13000 | 195.9332        | 0.6605    | 0.6279 | 0.6438 | 0.9652   |
| 186.1884      | 7.8591 | 13500 | 193.1577        | 0.6772    | 0.6057 | 0.6395 | 0.9652   |
| 183.2947      | 8.1502 | 14000 | 190.2176        | 0.6697    | 0.6318 | 0.6502 | 0.9654   |
| 180.5764      | 8.4413 | 14500 | 187.9859        | 0.6970    | 0.6091 | 0.6501 | 0.9657   |
| 178.5341      | 8.7324 | 15000 | 186.4189        | 0.6843    | 0.5976 | 0.6380 | 0.9645   |
| 176.79        | 9.0234 | 15500 | 184.5720        | 0.6846    | 0.6198 | 0.6506 | 0.9661   |
| 175.528       | 9.3145 | 16000 | 183.8221        | 0.7059    | 0.5905 | 0.6431 | 0.9650   |
| 174.3179      | 9.6056 | 16500 | 182.7365        | 0.6842    | 0.6188 | 0.6498 | 0.9658   |
| 174.2736      | 9.8967 | 17000 | 182.5497        | 0.6804    | 0.6154 | 0.6463 | 0.9657   |
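
The card does not state how the entity-level metrics above were computed; the sketch below assumes the common setup of `seqeval` over BIO-tagged label sequences, with toy inputs for illustration only.

```python
# Hedged sketch: entity-level precision/recall/F1 as typically computed with seqeval.
from seqeval.metrics import accuracy_score, f1_score, precision_score, recall_score

y_true = [["B-PER", "I-PER", "O", "B-LOC"]]  # toy gold labels
y_pred = [["B-PER", "I-PER", "O", "O"]]      # toy predictions

print(precision_score(y_true, y_pred))  # fraction of predicted entities that are correct
print(recall_score(y_true, y_pred))     # fraction of gold entities that are recovered
print(f1_score(y_true, y_pred))         # harmonic mean of precision and recall
print(accuracy_score(y_true, y_pred))   # token-level accuracy
```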

### Framework versions
- Transformers 4.44.2
- Pytorch 2.1.1+cu121
- Datasets 2.14.5
- Tokenizers 0.19.1