PhobertLexicalHostingMeta-v11_14_2024

This model is a fine-tuned version of vinai/phobert-base-v2 on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 0.3926
  • Accuracy: 0.9107
  • F1: 0.7086
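Accuracy and F1 above are standard classification metrics, but the card does not state the label set or the F1 averaging. As a minimal sketch, the version below assumes binary 0/1 labels and positive-class F1:

```python
def accuracy_and_f1(y_true, y_pred):
    """Accuracy and positive-class F1 for binary 0/1 labels (assumed setup)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    accuracy = sum(1 for t, p in zip(y_true, y_pred) if t == p) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return accuracy, f1
```

The gap between accuracy (0.91) and F1 (0.71) suggests class imbalance, which is why both metrics are reported.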

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 2e-05
  • train_batch_size: 64
  • eval_batch_size: 64
  • seed: 42
  • optimizer: AdamW (adamw_torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: linear
  • num_epochs: 20
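With lr_scheduler_type set to linear and no warmup steps listed, the learning rate presumably decays linearly from 2e-05 toward zero over the run (roughly 3,200 steps per the results table). A minimal sketch of that schedule; the total step count and the zero-warmup assumption are inferred, not stated in the card:

```python
def linear_lr(step, total_steps=3200, base_lr=2e-5, warmup_steps=0):
    """Linear-decay learning rate with optional warmup (assumed: no warmup)."""
    if step < warmup_steps:
        # Linear ramp-up during warmup
        return base_lr * step / max(1, warmup_steps)
    # Linear decay from base_lr at the end of warmup to 0 at total_steps
    remaining = max(0, total_steps - step)
    return base_lr * remaining / max(1, total_steps - warmup_steps)
```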

Training results

| Training Loss | Epoch   | Step | Validation Loss | Accuracy | F1     |
|:-------------:|:-------:|:----:|:---------------:|:--------:|:------:|
| No log        | 0.6211  | 100  | 1.9279          | 0.7977   | 0.5812 |
| 1.1694        | 1.2422  | 200  | 0.2518          | 0.9227   | 0.7147 |
| 1.1694        | 1.8634  | 300  | 0.3123          | 0.8991   | 0.6752 |
| 0.9066        | 2.4845  | 400  | 0.3268          | 0.8905   | 0.6802 |
| 0.7239        | 3.1056  | 500  | 0.2673          | 0.9075   | 0.7024 |
| 0.7239        | 3.7267  | 600  | 0.2586          | 0.9104   | 0.6931 |
| 0.6778        | 4.3478  | 700  | 0.5058          | 0.8618   | 0.6411 |
| 0.6778        | 4.9689  | 800  | 0.2765          | 0.9104   | 0.6989 |
| 0.8308        | 5.5901  | 900  | 0.2210          | 0.9286   | 0.7346 |
| 0.6671        | 6.2112  | 1000 | 0.2969          | 0.9142   | 0.7044 |
| 0.6671        | 6.8323  | 1100 | 0.2703          | 0.9248   | 0.7257 |
| 0.6693        | 7.4534  | 1200 | 1.1115          | 0.8763   | 0.6646 |
| 0.4595        | 8.0745  | 1300 | 0.3775          | 0.8763   | 0.6575 |
| 0.4595        | 8.6957  | 1400 | 0.3518          | 0.9066   | 0.7050 |
| 0.4777        | 9.3168  | 1500 | 0.3372          | 0.9107   | 0.7032 |
| 0.4777        | 9.9379  | 1600 | 0.3355          | 0.9079   | 0.7044 |
| 0.5556        | 10.5590 | 1700 | 0.3375          | 0.9210   | 0.7281 |
| 0.4535        | 11.1801 | 1800 | 0.4279          | 0.8937   | 0.6906 |
| 0.4535        | 11.8012 | 1900 | 0.3980          | 0.9059   | 0.7012 |
| 0.3419        | 12.4224 | 2000 | 0.3628          | 0.9107   | 0.7121 |
| 0.5153        | 13.0435 | 2100 | 0.3750          | 0.9082   | 0.7049 |
| 0.5153        | 13.6646 | 2200 | 0.4046          | 0.9057   | 0.7055 |
| 0.5357        | 14.2857 | 2300 | 0.3281          | 0.9275   | 0.7314 |
| 0.5357        | 14.9068 | 2400 | 0.4413          | 0.8892   | 0.6879 |
| 0.4949        | 15.5280 | 2500 | 0.4054          | 0.8940   | 0.6817 |
| 0.4613        | 16.1491 | 2600 | 0.3914          | 0.9076   | 0.7035 |
| 0.4613        | 16.7702 | 2700 | 0.3740          | 0.9137   | 0.7118 |
| 0.3909        | 17.3913 | 2800 | 0.4084          | 0.9068   | 0.7030 |
| 0.2968        | 18.0124 | 2900 | 0.3745          | 0.9130   | 0.7120 |
| 0.2968        | 18.6335 | 3000 | 0.4066          | 0.9059   | 0.7047 |
| 0.3972        | 19.2547 | 3100 | 0.3913          | 0.9107   | 0.7086 |
| 0.3972        | 19.8758 | 3200 | 0.3926          | 0.9107   | 0.7086 |
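Note that the final checkpoint (loss 0.3926, F1 0.7086) is not the strongest in the table: the step-900 evaluation has both the lowest validation loss (0.2210) and the highest F1 (0.7346). A small sketch of selecting the best logged evaluation, using a few (step, validation loss, F1) triples copied from the table:

```python
# A few (step, validation_loss, f1) rows copied from the results table above
rows = [
    (900, 0.2210, 0.7346),
    (1000, 0.2969, 0.7044),
    (2300, 0.3281, 0.7314),
    (3200, 0.3926, 0.7086),
]

best_by_f1 = max(rows, key=lambda r: r[2])    # row with the highest F1
best_by_loss = min(rows, key=lambda r: r[1])  # row with the lowest validation loss
```

Both criteria pick step 900 here, which is why training with `load_best_model_at_end` (if it was used; the card doesn't say) would not return the final-epoch weights.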

Framework versions

  • Transformers 4.46.2
  • PyTorch 2.4.0
  • Datasets 3.0.1
  • Tokenizers 0.20.0
Model size

269M params (Safetensors, tensor type F32)

Model tree for nompahm/PhobertLexicalHostingMeta-v11_14_2024

Finetuned from vinai/phobert-base-v2