---
base_model: haryoaw/scenario-TCR-NER_data-univner_full
library_name: transformers
license: mit
metrics:
  - precision
  - recall
  - f1
  - accuracy
tags:
  - generated_from_trainer
model-index:
  - name: scenario-kd-scr-ner-full-xlmr_data-univner_full44
    results: []
---

# scenario-kd-scr-ner-full-xlmr_data-univner_full44

This model is a fine-tuned version of [haryoaw/scenario-TCR-NER_data-univner_full](https://huggingface.co/haryoaw/scenario-TCR-NER_data-univner_full) on an unspecified dataset. It achieves the following results on the evaluation set:

- Loss: 158.9407
- Precision: 0.5487
- Recall: 0.5432
- F1: 0.5459
- Accuracy: 0.9588
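
The card does not include a usage example. A minimal inference sketch, assuming the checkpoint is published under the model name above and exposes a standard token-classification head:

```python
from transformers import pipeline

# Repo id assumed from the model name above; adjust if the checkpoint
# is published under a different path.
ner = pipeline(
    "token-classification",
    model="haryoaw/scenario-kd-scr-ner-full-xlmr_data-univner_full44",
    aggregation_strategy="simple",  # merge sub-word pieces into entity spans
)

print(ner("Barack Obama visited Jakarta in 2010."))
```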

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 32
- seed: 44
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9, 0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
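
For reference, a sketch of how the hyperparameters above map onto `transformers.TrainingArguments`; the output directory is a placeholder, and the model/dataset wiring is omitted since the card does not specify it:

```python
from transformers import TrainingArguments

# Mirrors the hyperparameters listed above; output_dir is a placeholder.
training_args = TrainingArguments(
    output_dir="scenario-kd-scr-ner-full-xlmr_data-univner_full44",
    learning_rate=3e-05,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=32,
    gradient_accumulation_steps=4,  # effective train batch size: 8 * 4 = 32
    num_train_epochs=10,
    lr_scheduler_type="linear",
    seed=44,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-08,
)
```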

### Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:--:|:--------:|
| 460.9453 | 0.2911 | 500 | 385.8351 | 0.0 | 0.0 | 0.0 | 0.9241 |
| 363.4557 | 0.5822 | 1000 | 346.2052 | 0.4279 | 0.0544 | 0.0965 | 0.9263 |
| 332.2012 | 0.8732 | 1500 | 325.9525 | 0.3466 | 0.0752 | 0.1235 | 0.9270 |
| 311.0392 | 1.1643 | 2000 | 305.0690 | 0.2719 | 0.1538 | 0.1965 | 0.9302 |
| 293.6181 | 1.4554 | 2500 | 289.0764 | 0.2902 | 0.1616 | 0.2076 | 0.9319 |
| 278.0759 | 1.7465 | 3000 | 274.3208 | 0.3437 | 0.1727 | 0.2299 | 0.9351 |
| 263.7235 | 2.0375 | 3500 | 261.5710 | 0.3720 | 0.2477 | 0.2974 | 0.9395 |
| 250.8804 | 2.3286 | 4000 | 251.6802 | 0.4335 | 0.2371 | 0.3065 | 0.9399 |
| 241.6486 | 2.6197 | 4500 | 241.4983 | 0.3763 | 0.3122 | 0.3413 | 0.9431 |
| 232.1585 | 2.9108 | 5000 | 234.7022 | 0.4332 | 0.2502 | 0.3172 | 0.9419 |
| 223.428 | 3.2019 | 5500 | 225.8982 | 0.4283 | 0.3369 | 0.3771 | 0.9458 |
| 215.7432 | 3.4929 | 6000 | 218.6250 | 0.4172 | 0.3455 | 0.3780 | 0.9469 |
| 208.5821 | 3.7840 | 6500 | 212.5754 | 0.4244 | 0.4311 | 0.4277 | 0.9480 |
| 202.7347 | 4.0751 | 7000 | 206.2203 | 0.4440 | 0.4046 | 0.4233 | 0.9502 |
| 196.2593 | 4.3662 | 7500 | 200.8361 | 0.4877 | 0.4275 | 0.4556 | 0.9518 |
| 191.1748 | 4.6573 | 8000 | 196.1823 | 0.4735 | 0.4281 | 0.4496 | 0.9524 |
| 186.3328 | 4.9483 | 8500 | 191.5347 | 0.4679 | 0.4555 | 0.4616 | 0.9531 |
| 180.9869 | 5.2394 | 9000 | 187.5859 | 0.4850 | 0.4979 | 0.4914 | 0.9549 |
| 176.8171 | 5.5305 | 9500 | 183.7527 | 0.4858 | 0.5217 | 0.5031 | 0.9551 |
| 173.9635 | 5.8216 | 10000 | 180.3877 | 0.5310 | 0.4719 | 0.4997 | 0.9553 |
| 170.1918 | 6.1126 | 10500 | 177.5785 | 0.5234 | 0.4582 | 0.4887 | 0.9556 |
| 167.0458 | 6.4037 | 11000 | 174.8427 | 0.5411 | 0.4620 | 0.4984 | 0.9554 |
| 164.3432 | 6.6948 | 11500 | 171.9410 | 0.5348 | 0.5089 | 0.5215 | 0.9570 |
| 161.6384 | 6.9859 | 12000 | 169.9951 | 0.5304 | 0.5017 | 0.5156 | 0.9573 |
| 159.3906 | 7.2770 | 12500 | 167.7097 | 0.5450 | 0.5037 | 0.5235 | 0.9576 |
| 157.2167 | 7.5680 | 13000 | 165.9562 | 0.5248 | 0.5260 | 0.5254 | 0.9578 |
| 155.8673 | 7.8591 | 13500 | 164.2853 | 0.5485 | 0.5210 | 0.5344 | 0.9581 |
| 153.9819 | 8.1502 | 14000 | 162.9678 | 0.5385 | 0.5161 | 0.5271 | 0.9581 |
| 152.5708 | 8.4413 | 14500 | 161.7499 | 0.5424 | 0.5385 | 0.5404 | 0.9583 |
| 151.7945 | 8.7324 | 15000 | 160.7884 | 0.5528 | 0.5282 | 0.5402 | 0.9585 |
| 150.7972 | 9.0234 | 15500 | 160.0030 | 0.5441 | 0.5455 | 0.5448 | 0.9590 |
| 149.6132 | 9.3145 | 16000 | 159.5214 | 0.5446 | 0.5474 | 0.5460 | 0.9585 |
| 149.1168 | 9.6056 | 16500 | 159.1108 | 0.5540 | 0.5376 | 0.5457 | 0.9588 |
| 149.234 | 9.8967 | 17000 | 158.9407 | 0.5487 | 0.5432 | 0.5459 | 0.9588 |
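
In cards like this, the precision, recall, and F1 columns are typically entity-level scores while accuracy is token-level. The card does not include its evaluation code, so the following seqeval-based computation is an assumption about how such numbers are produced:

```python
from seqeval.metrics import accuracy_score, f1_score, precision_score, recall_score

# Toy gold and predicted IOB2 tag sequences, one inner list per sentence.
y_true = [["B-PER", "I-PER", "O", "O", "B-LOC"]]
y_pred = [["B-PER", "I-PER", "O", "O", "O"]]

print(precision_score(y_true, y_pred))  # entity-level precision
print(recall_score(y_true, y_pred))     # entity-level recall
print(f1_score(y_true, y_pred))         # entity-level F1
print(accuracy_score(y_true, y_pred))   # token-level accuracy
```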

### Framework versions

- Transformers 4.44.2
- PyTorch 2.1.1+cu121
- Datasets 2.14.5
- Tokenizers 0.19.1