
bert-base-multilingual-uncased-finetuned-hp

This model is a fine-tuned version of bert-base-multilingual-uncased on an unspecified dataset. It achieves the following results on the evaluation set:

  • Loss: 1.4747
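
Since the card reports only a loss, the checkpoint most likely carries the masked-language-modeling head of its BERT base model. Below is a minimal usage sketch under that assumption (the card itself does not state the task), using the fill-mask pipeline:

```python
from transformers import pipeline

# Assumption: the checkpoint exposes a masked-LM head; the card does not
# confirm the task, so treat this as an illustrative sketch.
fill_mask = pipeline(
    "fill-mask",
    model="rman-rahimi-29/bert-base-multilingual-uncased-finetuned-hp",
)

# BERT-style models use the [MASK] placeholder token.
for prediction in fill_mask("The boy who [MASK]."):
    print(prediction["token_str"], prediction["score"])
```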

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 2e-05
  • train_batch_size: 16
  • eval_batch_size: 16
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 100
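
A sketch of how the hyperparameters above map onto `transformers.TrainingArguments`; the dataset, tokenizer, and data collator are not specified on the card, so only the listed values are pinned here:

```python
from transformers import TrainingArguments

# Mirrors the hyperparameters listed above. Adam's betas/epsilon and the
# linear scheduler are already the TrainingArguments defaults in
# Transformers 4.34, so they need no explicit arguments.
training_args = TrainingArguments(
    output_dir="bert-base-multilingual-uncased-finetuned-hp",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=100,
    evaluation_strategy="epoch",  # assumption: the table logs one eval per epoch
)
```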

Training results

| Training Loss | Epoch | Step | Validation Loss |
|---------------|-------|------|-----------------|
| 2.6826 | 1.0 | 66 | 2.4979 |
| 2.4316 | 2.0 | 132 | 2.2737 |
| 2.2563 | 3.0 | 198 | 2.1285 |
| 2.1158 | 4.0 | 264 | 2.0855 |
| 2.0196 | 5.0 | 330 | 1.9913 |
| 1.9651 | 6.0 | 396 | 1.9124 |
| 1.8956 | 7.0 | 462 | 1.9461 |
| 1.856 | 8.0 | 528 | 1.9519 |
| 1.8146 | 9.0 | 594 | 1.8063 |
| 1.7675 | 10.0 | 660 | 1.8229 |
| 1.7473 | 11.0 | 726 | 1.8271 |
| 1.7069 | 12.0 | 792 | 1.8147 |
| 1.678 | 13.0 | 858 | 1.8209 |
| 1.6291 | 14.0 | 924 | 1.7894 |
| 1.6251 | 15.0 | 990 | 1.7851 |
| 1.5939 | 16.0 | 1056 | 1.7237 |
| 1.5635 | 17.0 | 1122 | 1.7627 |
| 1.5088 | 18.0 | 1188 | 1.7392 |
| 1.5136 | 19.0 | 1254 | 1.6703 |
| 1.4782 | 20.0 | 1320 | 1.6922 |
| 1.4714 | 21.0 | 1386 | 1.6810 |
| 1.4451 | 22.0 | 1452 | 1.6318 |
| 1.4399 | 23.0 | 1518 | 1.6579 |
| 1.4262 | 24.0 | 1584 | 1.6303 |
| 1.3736 | 25.0 | 1650 | 1.6777 |
| 1.3896 | 26.0 | 1716 | 1.6887 |
| 1.3508 | 27.0 | 1782 | 1.6420 |
| 1.3939 | 28.0 | 1848 | 1.6026 |
| 1.3334 | 29.0 | 1914 | 1.6820 |
| 1.3169 | 30.0 | 1980 | 1.6018 |
| 1.2857 | 31.0 | 2046 | 1.6437 |
| 1.3152 | 32.0 | 2112 | 1.5975 |
| 1.2835 | 33.0 | 2178 | 1.6155 |
| 1.2388 | 34.0 | 2244 | 1.6033 |
| 1.2591 | 35.0 | 2310 | 1.6100 |
| 1.246 | 36.0 | 2376 | 1.5424 |
| 1.2196 | 37.0 | 2442 | 1.6082 |
| 1.2341 | 38.0 | 2508 | 1.4954 |
| 1.196 | 39.0 | 2574 | 1.5132 |
| 1.2025 | 40.0 | 2640 | 1.5710 |
| 1.1778 | 41.0 | 2706 | 1.5356 |
| 1.1645 | 42.0 | 2772 | 1.5545 |
| 1.1729 | 43.0 | 2838 | 1.4637 |
| 1.1542 | 44.0 | 2904 | 1.5700 |
| 1.1505 | 45.0 | 2970 | 1.5840 |
| 1.1224 | 46.0 | 3036 | 1.5451 |
| 1.118 | 47.0 | 3102 | 1.6250 |
| 1.0997 | 48.0 | 3168 | 1.4816 |
| 1.109 | 49.0 | 3234 | 1.5641 |
| 1.1243 | 50.0 | 3300 | 1.5099 |
| 1.1159 | 51.0 | 3366 | 1.5584 |
| 1.0955 | 52.0 | 3432 | 1.5589 |
| 1.0557 | 53.0 | 3498 | 1.5601 |
| 1.0685 | 54.0 | 3564 | 1.5211 |
| 1.0674 | 55.0 | 3630 | 1.5651 |
| 1.0625 | 56.0 | 3696 | 1.5918 |
| 1.0403 | 57.0 | 3762 | 1.5817 |
| 1.0197 | 58.0 | 3828 | 1.4936 |
| 1.031 | 59.0 | 3894 | 1.5860 |
| 1.0364 | 60.0 | 3960 | 1.5774 |
| 1.0375 | 61.0 | 4026 | 1.5358 |
| 1.0127 | 62.0 | 4092 | 1.5451 |
| 0.9982 | 63.0 | 4158 | 1.5271 |
| 0.9949 | 64.0 | 4224 | 1.5242 |
| 0.9962 | 65.0 | 4290 | 1.5564 |
| 1.0084 | 66.0 | 4356 | 1.5668 |
| 1.0076 | 67.0 | 4422 | 1.5867 |
| 0.9892 | 68.0 | 4488 | 1.5238 |
| 0.9889 | 69.0 | 4554 | 1.5097 |
| 0.9932 | 70.0 | 4620 | 1.5491 |
| 0.9815 | 71.0 | 4686 | 1.4237 |
| 0.9777 | 72.0 | 4752 | 1.5569 |
| 0.9525 | 73.0 | 4818 | 1.5103 |
| 0.9607 | 74.0 | 4884 | 1.5665 |
| 0.9488 | 75.0 | 4950 | 1.5910 |
| 0.9492 | 76.0 | 5016 | 1.5779 |
| 0.9667 | 77.0 | 5082 | 1.5581 |
| 0.9495 | 78.0 | 5148 | 1.4898 |
| 0.9457 | 79.0 | 5214 | 1.5638 |
| 0.9511 | 80.0 | 5280 | 1.5825 |
| 0.9173 | 81.0 | 5346 | 1.5565 |
| 0.9209 | 82.0 | 5412 | 1.5550 |
| 0.9446 | 83.0 | 5478 | 1.5431 |
| 0.9479 | 84.0 | 5544 | 1.4988 |
| 0.8999 | 85.0 | 5610 | 1.5311 |
| 0.9096 | 86.0 | 5676 | 1.5187 |
| 0.9044 | 87.0 | 5742 | 1.4801 |
| 0.8849 | 88.0 | 5808 | 1.5176 |
| 0.9006 | 89.0 | 5874 | 1.5871 |
| 0.9038 | 90.0 | 5940 | 1.4757 |
| 0.9026 | 91.0 | 6006 | 1.5138 |
| 0.8935 | 92.0 | 6072 | 1.6166 |
| 0.8812 | 93.0 | 6138 | 1.5561 |
| 0.8725 | 94.0 | 6204 | 1.5315 |
| 0.8941 | 95.0 | 6270 | 1.5468 |
| 0.9013 | 96.0 | 6336 | 1.6091 |
| 0.9232 | 97.0 | 6402 | 1.5199 |
| 0.8891 | 98.0 | 6468 | 1.5210 |
| 0.8956 | 99.0 | 6534 | 1.4859 |
| 0.8876 | 100.0 | 6600 | 1.4610 |
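
Training loss falls throughout the run, while validation loss plateaus around 1.5 after roughly epoch 40, suggesting diminishing returns from the long schedule. A quick sketch for visualizing that, assuming the three numeric columns of the table above are loaded as lists:

```python
import matplotlib.pyplot as plt

# Placeholder data: fill these lists with the full columns from the
# table above (only the first three rows are shown here).
epochs = list(range(1, 101))
train_loss = [2.6826, 2.4316, 2.2563]  # ... remaining rows from the table
val_loss = [2.4979, 2.2737, 2.1285]    # ... remaining rows from the table

plt.plot(epochs[: len(train_loss)], train_loss, label="training loss")
plt.plot(epochs[: len(val_loss)], val_loss, label="validation loss")
plt.xlabel("epoch")
plt.ylabel("loss")
plt.legend()
plt.show()
```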

Framework versions

  • Transformers 4.34.1
  • Pytorch 2.1.0+cu118
  • Datasets 2.14.5
  • Tokenizers 0.14.1
