---
base_model: haryoaw/scenario-TCR-NER_data-univner_half
library_name: transformers
license: mit
metrics:
- precision
- recall
- f1
- accuracy
tags:
- generated_from_trainer
model-index:
- name: scenario-kd-scr-ner-full-xlmr_data-univner_half44
  results: []
---
# scenario-kd-scr-ner-full-xlmr_data-univner_half44

This model is a fine-tuned version of haryoaw/scenario-TCR-NER_data-univner_half on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 238.5785
- Precision: 0.3821
- Recall: 0.2681
- F1: 0.3151
- Accuracy: 0.9380
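As a quick sanity check, the reported F1 is the harmonic mean of the reported precision and recall. A minimal sketch, using only the numbers listed above:

```python
# Verify that the reported F1 (0.3151) is consistent with the
# reported precision and recall on the evaluation set.
precision = 0.3821
recall = 0.2681

# F1 is the harmonic mean of precision and recall.
f1 = 2 * precision * recall / (precision + recall)

print(round(f1, 4))  # 0.3151, matching the reported F1
```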
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 32
- seed: 44
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
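The listed total train batch size follows from the per-device batch size and gradient accumulation. A minimal sketch, assuming single-device training (as the listed values imply) and treating the last logged step (8500) as an approximation of the total step count:

```python
# Effective train batch size = per-device batch size
# x gradient accumulation steps, assuming a single device.
train_batch_size = 8
gradient_accumulation_steps = 4

total_train_batch_size = train_batch_size * gradient_accumulation_steps
print(total_train_batch_size)  # 32, matching the listed total_train_batch_size

# With a linear scheduler and no warmup listed, the learning rate
# decays from its initial value toward 0 over training (illustrative;
# 8500 is the last logged step, not necessarily the exact total).
learning_rate = 3e-05
total_steps = 8500

def lr_at(step):
    return learning_rate * (1 - step / total_steps)
```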
### Training results
| Training Loss | Epoch  | Step | Validation Loss | Precision | Recall | F1     | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 446.8617      | 0.5828 | 500  | 369.9693        | 0.0       | 0.0    | 0.0    | 0.9241   |
| 345.828       | 1.1655 | 1000 | 339.4001        | 0.4474    | 0.0074 | 0.0145 | 0.9243   |
| 316.4135      | 1.7483 | 1500 | 320.2157        | 0.3593    | 0.0778 | 0.1279 | 0.9271   |
| 295.3783      | 2.3310 | 2000 | 303.7513        | 0.4250    | 0.0740 | 0.1261 | 0.9274   |
| 278.865       | 2.9138 | 2500 | 291.9803        | 0.3462    | 0.1311 | 0.1902 | 0.9310   |
| 265.0536      | 3.4965 | 3000 | 282.1734        | 0.3378    | 0.1561 | 0.2135 | 0.9319   |
| 252.7824      | 4.0793 | 3500 | 274.9576        | 0.3486    | 0.2065 | 0.2593 | 0.9342   |
| 243.1838      | 4.6620 | 4000 | 265.2098        | 0.3825    | 0.1775 | 0.2424 | 0.9352   |
| 235.3429      | 5.2448 | 4500 | 260.4372        | 0.3720    | 0.2352 | 0.2882 | 0.9369   |
| 227.8851      | 5.8275 | 5000 | 256.3334        | 0.3570    | 0.2534 | 0.2964 | 0.9365   |
| 221.9237      | 6.4103 | 5500 | 250.2192        | 0.3931    | 0.2375 | 0.2961 | 0.9383   |
| 217.836       | 6.9930 | 6000 | 245.7306        | 0.3999    | 0.2268 | 0.2894 | 0.9385   |
| 213.3779      | 7.5758 | 6500 | 242.3217        | 0.3961    | 0.2378 | 0.2972 | 0.9391   |
| 209.3609      | 8.1585 | 7000 | 241.0757        | 0.3846    | 0.2448 | 0.2992 | 0.9381   |
| 207.6172      | 8.7413 | 7500 | 239.1901        | 0.3905    | 0.2535 | 0.3074 | 0.9391   |
| 205.3707      | 9.3240 | 8000 | 239.5822        | 0.3728    | 0.2759 | 0.3171 | 0.9378   |
| 204.5786      | 9.9068 | 8500 | 238.5785        | 0.3821    | 0.2681 | 0.3151 | 0.9380   |
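The Epoch and Step columns together imply the size of the effective training set. A rough back-of-the-envelope estimate from the first logged row (step 500 at epoch 0.5828), assuming the total train batch size of 32 listed in the hyperparameters:

```python
# Estimate the training-set size from the logged epoch/step ratio.
step, epoch = 500, 0.5828
total_train_batch_size = 32

steps_per_epoch = step / epoch  # ~858 optimizer steps per epoch
approx_examples = steps_per_epoch * total_train_batch_size

print(round(approx_examples))  # roughly 27,000-28,000 training examples
```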
### Framework versions
- Transformers 4.44.2
- Pytorch 2.1.1+cu121
- Datasets 2.14.5
- Tokenizers 0.19.1