---
base_model: vinai/phobert-base-v2
tags:
  - generated_from_trainer
metrics:
  - accuracy
  - recall
  - precision
model-index:
  - name: cls-comment-phobert-base-v2-v3.2
    results: []
---

# cls-comment-phobert-base-v2-v3.2

This model is a fine-tuned version of vinai/phobert-base-v2 on an unknown dataset. It achieves the following results on the evaluation set:

- Loss: 0.4780
- Accuracy: 0.9383
- F1 Score: 0.9288
- Recall: 0.9294
- Precision: 0.9285
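
Since the card does not yet include a usage example, the sketch below shows one way to load the checkpoint for inference with the Transformers `pipeline`. The repository id is taken from the card title, the Vietnamese example sentence and the printed output are purely illustrative, and the label names depend on the (undocumented) label mapping in the model's config.

```python
# Minimal inference sketch (assumes the checkpoint is published as
# "tiennguyenbnbk/cls-comment-phobert-base-v2-v3.2" and carries a
# sequence-classification head; label names come from the model config).
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="tiennguyenbnbk/cls-comment-phobert-base-v2-v3.2",
)

# PhoBERT is pretrained on word-segmented Vietnamese, so inputs should normally
# be segmented first (e.g. with VnCoreNLP); the raw sentence here only
# illustrates the call.
print(classifier("Sản phẩm này rất tốt, tôi sẽ mua lại."))  # "This product is great, I will buy it again."
# -> e.g. [{'label': 'LABEL_0', 'score': 0.98}]  (illustrative output only)
```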

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 1e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 4000
- label_smoothing_factor: 0.05
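
These values map onto a `TrainingArguments` configuration roughly like the sketch below; everything not listed above (output directory, logging and evaluation cadence, dataset handling, the `Trainer` call itself) is an assumption and is either omitted or marked as a placeholder.

```python
# Hedged sketch of the hyperparameters above expressed as transformers.TrainingArguments;
# only the listed values come from the card, the rest are placeholders/assumptions.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="cls-comment-phobert-base-v2-v3.2",  # placeholder
    learning_rate=1e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    gradient_accumulation_steps=2,   # 64 * 2 = 128 total train batch size
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    max_steps=4000,                  # "training_steps" in the list above
    label_smoothing_factor=0.05,
    adam_beta1=0.9,                  # Adam betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,               # epsilon=1e-08
)
```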

### Training results

| Training Loss | Epoch   | Step | Validation Loss | Accuracy | F1 Score | Recall | Precision |
|:-------------:|:-------:|:----:|:---------------:|:--------:|:--------:|:------:|:---------:|
| 1.8639 | 0.8696 | 100 | 1.7088 | 0.4004 | 0.0835 | 0.1438 | 0.1795 |
| 1.5668 | 1.7391 | 200 | 1.3288 | 0.5800 | 0.2172 | 0.2575 | 0.2674 |
| 1.2197 | 2.6087 | 300 | 0.9746 | 0.7668 | 0.5366 | 0.5148 | 0.5820 |
| 0.9384 | 3.4783 | 400 | 0.7674 | 0.8391 | 0.6138 | 0.6267 | 0.6053 |
| 0.7551 | 4.3478 | 500 | 0.6780 | 0.8527 | 0.6284 | 0.6454 | 0.6147 |
| 0.6636 | 5.2174 | 600 | 0.6152 | 0.8684 | 0.6833 | 0.6785 | 0.7626 |
| 0.5767 | 6.0870 | 700 | 0.5487 | 0.8929 | 0.7884 | 0.7698 | 0.8968 |
| 0.5059 | 6.9565 | 800 | 0.5262 | 0.8986 | 0.8665 | 0.8534 | 0.8880 |
| 0.4512 | 7.8261 | 900 | 0.4882 | 0.9195 | 0.9002 | 0.9082 | 0.8928 |
| 0.4098 | 8.6957 | 1000 | 0.4828 | 0.9212 | 0.9111 | 0.9061 | 0.9183 |
| 0.3916 | 9.5652 | 1100 | 0.4685 | 0.9280 | 0.9193 | 0.9140 | 0.9254 |
| 0.373 | 10.4348 | 1200 | 0.4756 | 0.9239 | 0.9145 | 0.9210 | 0.9100 |
| 0.3592 | 11.3043 | 1300 | 0.4597 | 0.9318 | 0.9230 | 0.9203 | 0.9263 |
| 0.3377 | 12.1739 | 1400 | 0.4692 | 0.9304 | 0.9181 | 0.9198 | 0.9175 |
| 0.3299 | 13.0435 | 1500 | 0.4672 | 0.9329 | 0.9244 | 0.9216 | 0.9292 |
| 0.3198 | 13.9130 | 1600 | 0.4619 | 0.9331 | 0.9241 | 0.9225 | 0.9264 |
| 0.3121 | 14.7826 | 1700 | 0.4672 | 0.9331 | 0.9243 | 0.9245 | 0.9249 |
| 0.3053 | 15.6522 | 1800 | 0.4664 | 0.9345 | 0.9216 | 0.9272 | 0.9167 |
| 0.3058 | 16.5217 | 1900 | 0.4655 | 0.9331 | 0.9229 | 0.9221 | 0.9240 |
| 0.2976 | 17.3913 | 2000 | 0.4619 | 0.9356 | 0.9259 | 0.9221 | 0.9299 |
| 0.2975 | 18.2609 | 2100 | 0.4663 | 0.9342 | 0.9255 | 0.9248 | 0.9267 |
| 0.2872 | 19.1304 | 2200 | 0.4737 | 0.9345 | 0.9237 | 0.9194 | 0.9285 |
| 0.2879 | 20.0 | 2300 | 0.4799 | 0.9318 | 0.9201 | 0.9295 | 0.9116 |
| 0.2848 | 20.8696 | 2400 | 0.4843 | 0.9326 | 0.9194 | 0.9309 | 0.9092 |
| 0.2808 | 21.7391 | 2500 | 0.4839 | 0.9326 | 0.9243 | 0.9237 | 0.9259 |
| 0.2798 | 22.6087 | 2600 | 0.4840 | 0.9342 | 0.9240 | 0.9289 | 0.9197 |
| 0.2797 | 23.4783 | 2700 | 0.4770 | 0.9334 | 0.9223 | 0.9246 | 0.9203 |
| 0.2754 | 24.3478 | 2800 | 0.4863 | 0.9318 | 0.9225 | 0.9252 | 0.9212 |
| 0.2752 | 25.2174 | 2900 | 0.4879 | 0.9326 | 0.9243 | 0.9259 | 0.9238 |
| 0.2718 | 26.0870 | 3000 | 0.4788 | 0.9361 | 0.9270 | 0.9244 | 0.9301 |
| 0.2712 | 26.9565 | 3100 | 0.4766 | 0.9356 | 0.9253 | 0.9237 | 0.9273 |
| 0.2714 | 27.8261 | 3200 | 0.4780 | 0.9383 | 0.9288 | 0.9294 | 0.9285 |
| 0.2697 | 28.6957 | 3300 | 0.4857 | 0.9367 | 0.9263 | 0.9286 | 0.9243 |
| 0.2674 | 29.5652 | 3400 | 0.4876 | 0.9348 | 0.9235 | 0.9304 | 0.9174 |
| 0.2681 | 30.4348 | 3500 | 0.4869 | 0.9361 | 0.9262 | 0.9348 | 0.9184 |
| 0.2685 | 31.3043 | 3600 | 0.4931 | 0.9339 | 0.9241 | 0.9279 | 0.9212 |
| 0.2665 | 32.1739 | 3700 | 0.4851 | 0.9339 | 0.9234 | 0.9262 | 0.9211 |
| 0.2703 | 33.0435 | 3800 | 0.4864 | 0.9367 | 0.9263 | 0.9304 | 0.9226 |
| 0.2661 | 33.9130 | 3900 | 0.4849 | 0.9364 | 0.9271 | 0.9319 | 0.9227 |
| 0.2695 | 34.7826 | 4000 | 0.4863 | 0.9361 | 0.9269 | 0.9320 | 0.9223 |
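
The Accuracy, F1 Score, Recall, and Precision columns are consistent with a standard `compute_metrics` callback such as the sketch below; the averaging mode is not stated on the card, so `average="weighted"` here is an assumption.

```python
# Hedged sketch of a compute_metrics function that would report the metric
# columns above; average="weighted" is an assumption (the card does not say
# how the per-class scores were aggregated).
import numpy as np
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    precision, recall, f1, _ = precision_recall_fscore_support(
        labels, preds, average="weighted", zero_division=0
    )
    return {
        "accuracy": accuracy_score(labels, preds),
        "f1": f1,
        "recall": recall,
        "precision": precision,
    }
```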

### Framework versions

- Transformers 4.40.1
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1