---
library_name: transformers
license: mit
base_model: haryoaw/scenario-TCR-NER_data-univner_half
tags:
  - generated_from_trainer
metrics:
  - precision
  - recall
  - f1
  - accuracy
model-index:
  - name: scenario-kd-scr-ner-full-mdeberta_data-univner_half55
    results: []
---

# scenario-kd-scr-ner-full-mdeberta_data-univner_half55

This model is a fine-tuned version of [haryoaw/scenario-TCR-NER_data-univner_half](https://huggingface.co/haryoaw/scenario-TCR-NER_data-univner_half) on an unspecified dataset. It achieves the following results on the evaluation set:

- Loss: 369.5634
- Precision: 0.3741
- Recall: 0.4149
- F1: 0.3935
- Accuracy: 0.9238

## Model description

More information needed

## Intended uses & limitations

More information needed
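No usage notes are provided in the card. As a starting point, here is a minimal inference sketch; it assumes this checkpoint exposes a standard token-classification head (the NER metrics above suggest it does) and that the repo id is `haryoaw/` plus the model name, neither of which is confirmed by the card. The entity labels depend on the UniversalNER tagset the model was trained with.

```python
from transformers import pipeline

# Hypothetical sketch, not from the model card: assumes a standard
# token-classification checkpoint under the author's namespace.
ner = pipeline(
    "token-classification",
    model="haryoaw/scenario-kd-scr-ner-full-mdeberta_data-univner_half55",
    aggregation_strategy="simple",  # merge sub-word pieces into entity spans
)
print(ner("Angela Merkel visited the Louvre in Paris."))
```

Given the modest F1 reported above (~0.39 on the evaluation set), predictions should be treated as noisy.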

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 32
- seed: 55
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9, 0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
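For reference, the listed hyperparameters map onto `transformers.TrainingArguments` roughly as follows. This is a sketch only: `output_dir` is a placeholder, the `Trainer`/dataset wiring is omitted, and the Adam betas/epsilon shown are the library defaults, which happen to match the values listed above.

```python
from transformers import TrainingArguments

# Sketch reproducing the card's hyperparameters; output_dir is a placeholder.
args = TrainingArguments(
    output_dir="scenario-kd-scr-ner-full-mdeberta_data-univner_half55",
    learning_rate=3e-05,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=32,
    gradient_accumulation_steps=4,  # effective train batch: 8 * 4 = 32
    lr_scheduler_type="linear",
    num_train_epochs=10,
    seed=55,
    adam_beta1=0.9,      # library default, matches the card
    adam_beta2=0.999,    # library default, matches the card
    adam_epsilon=1e-08,  # library default, matches the card
)
```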

### Training results

| Training Loss | Epoch  | Step | Validation Loss | Precision | Recall | F1     | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 638.1208      | 0.5828 | 500  | 570.1356        | 1.0       | 0.0004 | 0.0009 | 0.9241   |
| 540.7434      | 1.1655 | 1000 | 524.2383        | 0.2886    | 0.0521 | 0.0882 | 0.9253   |
| 490.1496      | 1.7483 | 1500 | 493.6944        | 0.2927    | 0.1682 | 0.2137 | 0.9297   |
| 454.2343      | 2.3310 | 2000 | 471.6737        | 0.3338    | 0.2411 | 0.2800 | 0.9293   |
| 429.0546      | 2.9138 | 2500 | 450.0691        | 0.4515    | 0.2450 | 0.3176 | 0.9362   |
| 407.3853      | 3.4965 | 3000 | 436.8550        | 0.3570    | 0.2877 | 0.3186 | 0.9304   |
| 390.9232      | 4.0793 | 3500 | 424.0557        | 0.3848    | 0.3245 | 0.3521 | 0.9313   |
| 376.2085      | 4.6620 | 4000 | 417.5171        | 0.3349    | 0.3800 | 0.3561 | 0.9216   |
| 364.3876      | 5.2448 | 4500 | 404.9495        | 0.3708    | 0.3766 | 0.3737 | 0.9276   |
| 354.1139      | 5.8275 | 5000 | 398.9413        | 0.3534    | 0.3877 | 0.3697 | 0.9226   |
| 344.7845      | 6.4103 | 5500 | 394.6783        | 0.3273    | 0.4193 | 0.3676 | 0.9143   |
| 337.8201      | 6.9930 | 6000 | 382.1873        | 0.3881    | 0.3900 | 0.3891 | 0.9289   |
| 330.94        | 7.5758 | 6500 | 381.2287        | 0.3480    | 0.4074 | 0.3754 | 0.9188   |
| 326.2092      | 8.1585 | 7000 | 372.6259        | 0.4087    | 0.3877 | 0.3979 | 0.9306   |
| 322.0348      | 8.7413 | 7500 | 374.2613        | 0.3530    | 0.4144 | 0.3812 | 0.9184   |
| 319.0297      | 9.3240 | 8000 | 370.3267        | 0.3774    | 0.4131 | 0.3944 | 0.9239   |
| 317.8106      | 9.9068 | 8500 | 369.5634        | 0.3741    | 0.4149 | 0.3935 | 0.9238   |
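The Epoch and Step columns imply the rough size of the training set. A back-of-the-envelope check, assuming (not stated in the card) that evaluation ran every 500 optimizer steps and using the effective batch size of 32 from the hyperparameters:

```python
# At optimizer step 500 the logged epoch is 0.5828, so one epoch is
# about 500 / 0.5828 optimizer steps; times the effective batch size
# this approximates the number of training examples.
steps_per_epoch = 500 / 0.5828        # optimizer steps per epoch
effective_batch = 8 * 4               # train_batch_size * grad_accum
approx_examples = steps_per_epoch * effective_batch
print(round(steps_per_epoch), round(approx_examples))  # 858 27454
```

This is consistent with the log ending at step 8500 near epoch 9.9 of the 10 configured epochs.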

### Framework versions

- Transformers 4.44.2
- PyTorch 2.1.1+cu121
- Datasets 2.14.5
- Tokenizers 0.19.1