---
base_model: haryoaw/scenario-TCR-NER_data-univner_full
library_name: transformers
license: mit
metrics:
  - precision
  - recall
  - f1
  - accuracy
tags:
  - generated_from_trainer
model-index:
  - name: scenario-kd-po-ner-full-mdeberta_data-univner_full66
    results: []
---

# scenario-kd-po-ner-full-mdeberta_data-univner_full66

This model is a fine-tuned version of [haryoaw/scenario-TCR-NER_data-univner_full](https://huggingface.co/haryoaw/scenario-TCR-NER_data-univner_full) on an unspecified dataset. It achieves the following results on the evaluation set (a sketch of how such metrics are conventionally computed follows the list):

- Loss: 46.8943
- Precision: 0.8216
- Recall: 0.8305
- F1: 0.8260
- Accuracy: 0.9824
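
The card does not state how these metrics were computed, but `generated_from_trainer` NER runs conventionally report entity-level precision/recall/F1 and token-level accuracy in the style of seqeval. A minimal sketch under that assumption:

```python
# Hedged sketch: entity-level NER metrics via seqeval (an assumption; this card
# does not say which metric implementation produced the numbers above).
from seqeval.metrics import accuracy_score, f1_score, precision_score, recall_score

# Toy gold and predicted tag sequences in IOB2 format.
y_true = [["B-PER", "I-PER", "O", "B-LOC"]]
y_pred = [["B-PER", "I-PER", "O", "O"]]

print(precision_score(y_true, y_pred))  # fraction of predicted entities matching gold spans
print(recall_score(y_true, y_pred))     # fraction of gold entities that were predicted
print(f1_score(y_true, y_pred))         # harmonic mean of precision and recall
print(accuracy_score(y_true, y_pred))   # plain per-token accuracy
```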

## Model description

More information needed

## Intended uses & limitations

More information needed
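
In the absence of documented usage, here is a minimal inference sketch, assuming the checkpoint is published on the Hugging Face Hub as `haryoaw/scenario-kd-po-ner-full-mdeberta_data-univner_full66` (the name from this card's model-index) with NER label mappings in its config:

```python
# Hedged usage sketch: token-classification inference via the transformers pipeline.
# The model id comes from this card's model-index; the example sentence is arbitrary.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="haryoaw/scenario-kd-po-ner-full-mdeberta_data-univner_full66",
    aggregation_strategy="simple",  # merge sub-word pieces into whole entity spans
)

print(ner("Barack Obama was born in Hawaii."))
```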

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a hedged `TrainingArguments` sketch follows the list):

- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 32
- seed: 66
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9, 0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
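
A minimal sketch of how these values map onto `transformers` `TrainingArguments`; only the numeric values come from this card, and `output_dir` is a placeholder:

```python
# Hedged sketch: expressing the hyperparameters above as TrainingArguments.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="scenario-kd-po-ner-full-mdeberta_data-univner_full66",  # placeholder
    learning_rate=3e-05,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=32,
    gradient_accumulation_steps=4,  # effective train batch size: 8 * 4 = 32
    seed=66,
    lr_scheduler_type="linear",
    num_train_epochs=10,
    # Adam with betas=(0.9, 0.999) and epsilon=1e-08 matches the transformers
    # default optimizer settings, so no explicit optimizer arguments are needed.
)
```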

### Training results

| Training Loss | Epoch  | Step  | Validation Loss | Precision | Recall | F1     | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 135.9225      | 0.2911 | 500   | 108.8223        | 0.6029    | 0.4263 | 0.4995 | 0.9530   |
| 101.0224      | 0.5822 | 1000  | 94.7779         | 0.7231    | 0.7083 | 0.7156 | 0.9727   |
| 91.1928       | 0.8732 | 1500  | 88.1561         | 0.7488    | 0.7565 | 0.7526 | 0.9760   |
| 85.2395       | 1.1643 | 2000  | 83.2027         | 0.7757    | 0.7680 | 0.7718 | 0.9777   |
| 80.2907       | 1.4554 | 2500  | 79.1339         | 0.7824    | 0.7925 | 0.7874 | 0.9788   |
| 76.5231       | 1.7465 | 3000  | 75.6774         | 0.7942    | 0.7846 | 0.7894 | 0.9793   |
| 73.1464       | 2.0375 | 3500  | 72.6010         | 0.8048    | 0.7973 | 0.8010 | 0.9801   |
| 69.6437       | 2.3286 | 4000  | 69.6290         | 0.7955    | 0.8179 | 0.8066 | 0.9803   |
| 67.0793       | 2.6197 | 4500  | 67.2226         | 0.8016    | 0.8085 | 0.8051 | 0.9808   |
| 64.9103       | 2.9108 | 5000  | 65.1388         | 0.8012    | 0.8176 | 0.8093 | 0.9807   |
| 62.5177       | 3.2019 | 5500  | 63.1765         | 0.8105    | 0.8160 | 0.8133 | 0.9813   |
| 60.6079       | 3.4929 | 6000  | 61.4149         | 0.8158    | 0.8129 | 0.8143 | 0.9811   |
| 58.9252       | 3.7840 | 6500  | 59.9050         | 0.8118    | 0.8212 | 0.8165 | 0.9810   |
| 57.4544       | 4.0751 | 7000  | 58.3757         | 0.8063    | 0.8260 | 0.8161 | 0.9813   |
| 55.9212       | 4.3662 | 7500  | 57.1185         | 0.8129    | 0.8254 | 0.8191 | 0.9815   |
| 54.706        | 4.6573 | 8000  | 55.9905         | 0.8208    | 0.8197 | 0.8202 | 0.9818   |
| 53.5567       | 4.9483 | 8500  | 54.9749         | 0.8117    | 0.8259 | 0.8187 | 0.9813   |
| 52.4084       | 5.2394 | 9000  | 53.9236         | 0.8158    | 0.8228 | 0.8193 | 0.9815   |
| 51.3684       | 5.5305 | 9500  | 52.9420         | 0.8148    | 0.8263 | 0.8205 | 0.9817   |
| 50.5374       | 5.8216 | 10000 | 52.1205         | 0.8224    | 0.8209 | 0.8217 | 0.9819   |
| 49.7012       | 6.1126 | 10500 | 51.3587         | 0.8195    | 0.8310 | 0.8252 | 0.9820   |
| 48.8997       | 6.4037 | 11000 | 50.7199         | 0.8205    | 0.8270 | 0.8237 | 0.9819   |
| 48.3307       | 6.6948 | 11500 | 50.0936         | 0.8238    | 0.8215 | 0.8227 | 0.9821   |
| 47.765        | 6.9859 | 12000 | 49.5167         | 0.8177    | 0.8318 | 0.8247 | 0.9819   |
| 47.1176       | 7.2770 | 12500 | 49.0615         | 0.8195    | 0.8305 | 0.8249 | 0.9821   |
| 46.5727       | 7.5680 | 13000 | 48.6345         | 0.8176    | 0.8347 | 0.8260 | 0.9820   |
| 46.2968       | 7.8591 | 13500 | 48.2124         | 0.8193    | 0.8282 | 0.8237 | 0.9821   |
| 45.8193       | 8.1502 | 14000 | 47.8940         | 0.8236    | 0.8285 | 0.8260 | 0.9821   |
| 45.4871       | 8.4413 | 14500 | 47.5967         | 0.8171    | 0.8362 | 0.8266 | 0.9819   |
| 45.2671       | 8.7324 | 15000 | 47.3633         | 0.8252    | 0.8329 | 0.8290 | 0.9824   |
| 45.0471       | 9.0234 | 15500 | 47.1393         | 0.8245    | 0.8280 | 0.8262 | 0.9821   |
| 44.7971       | 9.3145 | 16000 | 47.0470         | 0.8234    | 0.8315 | 0.8274 | 0.9822   |
| 44.7601       | 9.6056 | 16500 | 46.9315         | 0.8223    | 0.8331 | 0.8276 | 0.9825   |
| 44.647        | 9.8967 | 17000 | 46.8943         | 0.8216    | 0.8305 | 0.8260 | 0.9824   |

### Framework versions

- Transformers 4.44.2
- Pytorch 2.1.1+cu121
- Datasets 2.14.5
- Tokenizers 0.19.1
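
To pin a matching environment, something along these lines should work (the `+cu121` PyTorch build comes from the PyTorch wheel index, so the plain `torch` pin below is indicative only):

```python
# Hedged sketch: installing the framework versions listed above.
import subprocess
import sys

subprocess.check_call([
    sys.executable, "-m", "pip", "install",
    "transformers==4.44.2",
    "datasets==2.14.5",
    "tokenizers==0.19.1",
    "torch==2.1.1",  # the run used 2.1.1+cu121; CUDA wheels need the PyTorch index URL
])
```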