---
library_name: transformers
license: mit
base_model: haryoaw/scenario-TCR-NER_data-univner_en
tags:
  - generated_from_trainer
metrics:
  - precision
  - recall
  - f1
  - accuracy
model-index:
  - name: scenario-kd-po-ner-full_data-univner_full66
    results: []
---

scenario-kd-po-ner-full_data-univner_full66

This model is a fine-tuned version of haryoaw/scenario-TCR-NER_data-univner_en on an unspecified dataset. It achieves the following results on the evaluation set (a metric-computation sketch follows the list):

  • Loss: 0.5267
  • Precision: 0.7744
  • Recall: 0.7391
  • F1: 0.7564
  • Accuracy: 0.9807
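The precision, recall, and F1 values are the entity-level scores typically reported for NER models, with accuracy usually computed per token. The sketch below shows how such metrics are commonly computed with seqeval; the label sequences are illustrative placeholders, and this is not necessarily the author's exact evaluation code.

```python
# Sketch of a typical seqeval-based NER evaluation (assumed setup, not the author's script).
from seqeval.metrics import accuracy_score, f1_score, precision_score, recall_score

# Gold and predicted IOB2 tag sequences, one list per sentence (placeholder data).
y_true = [["B-PER", "I-PER", "O", "B-LOC"], ["O", "B-ORG", "O"]]
y_pred = [["B-PER", "I-PER", "O", "B-LOC"], ["O", "B-PER", "O"]]

print("precision:", precision_score(y_true, y_pred))  # entity-level
print("recall:   ", recall_score(y_true, y_pred))     # entity-level
print("f1:       ", f1_score(y_true, y_pred))         # entity-level
print("accuracy: ", accuracy_score(y_true, y_pred))   # token-level
```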

Model description

More information needed

Intended uses & limitations

More information needed
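No usage notes are given, but since the metadata declares library_name: transformers and the model is a token-classification (NER) checkpoint, loading it would presumably follow the standard transformers pattern. The sketch below assumes the repository id is haryoaw/scenario-kd-po-ner-full_data-univner_full66 (inferred from the model-index name) and uses a simple aggregation strategy; neither detail is confirmed by this card.

```python
# Hedged usage sketch: standard transformers token-classification pipeline.
# The repository id is an assumption inferred from the model-index name above.
from transformers import AutoModelForTokenClassification, AutoTokenizer, pipeline

model_id = "haryoaw/scenario-kd-po-ner-full_data-univner_full66"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForTokenClassification.from_pretrained(model_id)

ner = pipeline(
    "token-classification",
    model=model,
    tokenizer=tokenizer,
    aggregation_strategy="simple",  # merge word pieces into whole entity spans
)

print(ner("Barack Obama visited Jakarta last week."))
```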

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (mirrored in the TrainingArguments sketch after this list):

  • learning_rate: 3e-05
  • train_batch_size: 32
  • eval_batch_size: 32
  • seed: 66
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 30
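These values map directly onto transformers TrainingArguments; the optimizer line matches the Trainer's default Adam settings (beta1=0.9, beta2=0.999, epsilon=1e-8). The sketch below is a reconstruction for reference, not the author's actual training script, and the output directory is a placeholder.

```python
# Reconstruction of the listed hyperparameters as transformers TrainingArguments.
# "./output" is a placeholder; the original training script is not part of this card.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./output",           # placeholder path
    learning_rate=3e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=66,
    adam_beta1=0.9,                  # Trainer default
    adam_beta2=0.999,                # Trainer default
    adam_epsilon=1e-8,               # Trainer default
    lr_scheduler_type="linear",
    num_train_epochs=30,
)
```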

Training results

| Training Loss | Epoch   | Step  | Validation Loss | Precision | Recall | F1     | Accuracy |
|---------------|---------|-------|-----------------|-----------|--------|--------|----------|
| 0.8089        | 1.2755  | 500   | 0.7185          | 0.7338    | 0.6791 | 0.7054 | 0.9767   |
| 0.4626        | 2.5510  | 1000  | 0.6447          | 0.7127    | 0.7319 | 0.7222 | 0.9787   |
| 0.3791        | 3.8265  | 1500  | 0.5975          | 0.7349    | 0.7288 | 0.7318 | 0.9794   |
| 0.3262        | 5.1020  | 2000  | 0.5889          | 0.7447    | 0.7277 | 0.7361 | 0.9797   |
| 0.2868        | 6.3776  | 2500  | 0.5714          | 0.7427    | 0.7381 | 0.7404 | 0.9799   |
| 0.2587        | 7.6531  | 3000  | 0.5688          | 0.7703    | 0.7257 | 0.7473 | 0.9807   |
| 0.2389        | 8.9286  | 3500  | 0.5610          | 0.7338    | 0.7246 | 0.7292 | 0.9791   |
| 0.2211        | 10.2041 | 4000  | 0.5571          | 0.7719    | 0.7495 | 0.7605 | 0.9800   |
| 0.2022        | 11.4796 | 4500  | 0.5692          | 0.7760    | 0.7029 | 0.7376 | 0.9799   |
| 0.1903        | 12.7551 | 5000  | 0.5554          | 0.7711    | 0.7360 | 0.7532 | 0.9804   |
| 0.1790        | 14.0306 | 5500  | 0.5411          | 0.7574    | 0.7371 | 0.7471 | 0.9803   |
| 0.1688        | 15.3061 | 6000  | 0.5353          | 0.7602    | 0.7516 | 0.7559 | 0.9804   |
| 0.1608        | 16.5816 | 6500  | 0.5383          | 0.7748    | 0.7267 | 0.7500 | 0.9802   |
| 0.1552        | 17.8571 | 7000  | 0.5223          | 0.7716    | 0.7381 | 0.7545 | 0.9800   |
| 0.1489        | 19.1327 | 7500  | 0.5300          | 0.7721    | 0.7329 | 0.7520 | 0.9801   |
| 0.1439        | 20.4082 | 8000  | 0.5321          | 0.7634    | 0.7246 | 0.7435 | 0.9797   |
| 0.1391        | 21.6837 | 8500  | 0.5204          | 0.7798    | 0.7443 | 0.7617 | 0.9805   |
| 0.1351        | 22.9592 | 9000  | 0.5251          | 0.7489    | 0.7350 | 0.7419 | 0.9800   |
| 0.1310        | 24.2347 | 9500  | 0.5164          | 0.7664    | 0.7505 | 0.7584 | 0.9808   |
| 0.1291        | 25.5102 | 10000 | 0.5216          | 0.7614    | 0.7236 | 0.7420 | 0.9798   |
| 0.1276        | 26.7857 | 10500 | 0.5257          | 0.7739    | 0.7371 | 0.7550 | 0.9804   |
| 0.1251        | 28.0612 | 11000 | 0.5156          | 0.7692    | 0.7453 | 0.7571 | 0.9808   |
| 0.1241        | 29.3367 | 11500 | 0.5267          | 0.7744    | 0.7391 | 0.7564 | 0.9807   |

Framework versions

  • Transformers 4.44.2
  • Pytorch 2.1.1+cu121
  • Datasets 2.14.5
  • Tokenizers 0.19.1
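To reduce the chance of behavioral drift, it may help to confirm that the runtime environment matches these versions; a minimal check could look like this (the exact PyTorch build string will depend on the installed wheel):

```python
# Quick runtime check against the versions listed above.
import datasets
import tokenizers
import torch
import transformers

print("transformers:", transformers.__version__)  # expected 4.44.2
print("torch:       ", torch.__version__)         # expected 2.1.1+cu121
print("datasets:    ", datasets.__version__)      # expected 2.14.5
print("tokenizers:  ", tokenizers.__version__)    # expected 0.19.1
```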