---
base_model: FacebookAI/xlm-roberta-base
library_name: transformers
license: mit
metrics:
  - precision
  - recall
  - f1
  - accuracy
tags:
  - generated_from_trainer
model-index:
  - name: scenario-kd-pre-ner-full-xlmr_data-univner_full44
    results: []
---

scenario-kd-pre-ner-full-xlmr_data-univner_full44

This model is a fine-tuned version of FacebookAI/xlm-roberta-base; the fine-tuning dataset is not specified in this card. It achieves the following results on the evaluation set (a brief inference sketch follows the list):

  • Loss: 0.4609
  • Precision: 0.8151
  • Recall: 0.8199
  • F1: 0.8175
  • Accuracy: 0.9812
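
As a quick sanity check of the numbers above, the checkpoint can be loaded through the transformers pipeline API. This is a minimal sketch, assuming the model is hosted on the Hugging Face Hub under a repo id matching the card name; the entity label set depends on the (unspecified) training data.

```python
# Minimal inference sketch; the repo id below is assumed from the card name.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="haryoaw/scenario-kd-pre-ner-full-xlmr_data-univner_full44",
    aggregation_strategy="simple",  # merge sub-word pieces into entity spans
)
print(ner("Barack Obama was born in Hawaii."))
```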

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (mapped to TrainingArguments in the sketch after this list):

  • learning_rate: 3e-05
  • train_batch_size: 8
  • eval_batch_size: 32
  • seed: 44
  • gradient_accumulation_steps: 4
  • total_train_batch_size: 32
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 10
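
For reproduction, these settings map directly onto transformers' TrainingArguments, as sketched below. The output_dir is a placeholder, and the listed Adam betas and epsilon are the library defaults, so they need no explicit override.

```python
# Hedged sketch: how the listed hyperparameters translate to TrainingArguments.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="scenario-kd-pre-ner-full-xlmr_data-univner_full44",  # placeholder
    learning_rate=3e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=32,
    gradient_accumulation_steps=4,  # effective train batch size: 8 * 4 = 32
    seed=44,
    lr_scheduler_type="linear",
    num_train_epochs=10,
    # adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-8 are the defaults,
    # matching "Adam with betas=(0.9,0.999) and epsilon=1e-08" above.
)
```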

Training results

| Training Loss | Epoch  | Step  | Validation Loss | Precision | Recall | F1     | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 1.4889        | 0.2911 | 500   | 0.8586          | 0.6595    | 0.7179 | 0.6875 | 0.9708   |
| 0.756         | 0.5822 | 1000  | 0.7200          | 0.7071    | 0.7746 | 0.7393 | 0.9739   |
| 0.6626        | 0.8732 | 1500  | 0.6671          | 0.7417    | 0.7530 | 0.7473 | 0.9760   |
| 0.5626        | 1.1643 | 2000  | 0.6205          | 0.7676    | 0.7785 | 0.7730 | 0.9772   |
| 0.514         | 1.4554 | 2500  | 0.6545          | 0.8094    | 0.7420 | 0.7743 | 0.9774   |
| 0.4905        | 1.7465 | 3000  | 0.5754          | 0.7770    | 0.7869 | 0.7819 | 0.9782   |
| 0.461         | 2.0375 | 3500  | 0.5559          | 0.7697    | 0.8101 | 0.7894 | 0.9790   |
| 0.4097        | 2.3286 | 4000  | 0.5613          | 0.7862    | 0.7836 | 0.7849 | 0.9785   |
| 0.3973        | 2.6197 | 4500  | 0.5514          | 0.7850    | 0.8003 | 0.7926 | 0.9795   |
| 0.3878        | 2.9108 | 5000  | 0.5299          | 0.7913    | 0.8039 | 0.7975 | 0.9791   |
| 0.3579        | 3.2019 | 5500  | 0.5424          | 0.8023    | 0.7852 | 0.7936 | 0.9790   |
| 0.3434        | 3.4929 | 6000  | 0.5077          | 0.7881    | 0.8085 | 0.7982 | 0.9795   |
| 0.3362        | 3.7840 | 6500  | 0.5244          | 0.8012    | 0.7943 | 0.7977 | 0.9793   |
| 0.3243        | 4.0751 | 7000  | 0.5158          | 0.8068    | 0.8108 | 0.8088 | 0.9801   |
| 0.3134        | 4.3662 | 7500  | 0.5081          | 0.8001    | 0.8137 | 0.8069 | 0.9799   |
| 0.3027        | 4.6573 | 8000  | 0.4989          | 0.8003    | 0.8169 | 0.8085 | 0.9803   |
| 0.2977        | 4.9483 | 8500  | 0.4926          | 0.8013    | 0.8121 | 0.8067 | 0.9804   |
| 0.2822        | 5.2394 | 9000  | 0.4905          | 0.8052    | 0.8081 | 0.8067 | 0.9801   |
| 0.2773        | 5.5305 | 9500  | 0.4864          | 0.8012    | 0.8049 | 0.8031 | 0.9798   |
| 0.2803        | 5.8216 | 10000 | 0.4883          | 0.7963    | 0.8090 | 0.8026 | 0.9798   |
| 0.2717        | 6.1126 | 10500 | 0.4941          | 0.8169    | 0.7909 | 0.8037 | 0.9798   |
| 0.258         | 6.4037 | 11000 | 0.4842          | 0.8008    | 0.8078 | 0.8043 | 0.9802   |
| 0.2572        | 6.6948 | 11500 | 0.4760          | 0.8129    | 0.8097 | 0.8113 | 0.9805   |
| 0.2553        | 6.9859 | 12000 | 0.4742          | 0.8119    | 0.8116 | 0.8117 | 0.9809   |
| 0.2462        | 7.2770 | 12500 | 0.4791          | 0.8116    | 0.8054 | 0.8085 | 0.9806   |
| 0.2447        | 7.5680 | 13000 | 0.4750          | 0.8017    | 0.8171 | 0.8093 | 0.9804   |
| 0.2463        | 7.8591 | 13500 | 0.4657          | 0.8179    | 0.8113 | 0.8146 | 0.9811   |
| 0.2381        | 8.1502 | 14000 | 0.4677          | 0.8025    | 0.8153 | 0.8088 | 0.9805   |
| 0.2357        | 8.4413 | 14500 | 0.4658          | 0.8135    | 0.8184 | 0.8159 | 0.9810   |
| 0.2333        | 8.7324 | 15000 | 0.4638          | 0.8144    | 0.8116 | 0.8130 | 0.9807   |
| 0.234         | 9.0234 | 15500 | 0.4605          | 0.8126    | 0.8165 | 0.8145 | 0.9810   |
| 0.2297        | 9.3145 | 16000 | 0.4670          | 0.8116    | 0.8080 | 0.8098 | 0.9808   |
| 0.2258        | 9.6056 | 16500 | 0.4651          | 0.8095    | 0.8142 | 0.8118 | 0.9808   |
| 0.2272        | 9.8967 | 17000 | 0.4609          | 0.8151    | 0.8199 | 0.8175 | 0.9812   |

Framework versions

  • Transformers 4.44.2
  • Pytorch 2.1.1+cu121
  • Datasets 2.14.5
  • Tokenizers 0.19.1
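
To reproduce the environment, the versions above can be checked at runtime; exact pins matter mainly for bit-for-bit reproduction.

```python
# Verify the local environment against the versions listed in this card.
import datasets, tokenizers, torch, transformers

print("Transformers:", transformers.__version__)  # expected 4.44.2
print("PyTorch:", torch.__version__)              # expected 2.1.1+cu121
print("Datasets:", datasets.__version__)          # expected 2.14.5
print("Tokenizers:", tokenizers.__version__)      # expected 0.19.1
```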