---
license: cc-by-nc-sa-4.0
base_model: microsoft/layoutlmv2-base-uncased
tags:
  - generated_from_trainer
model-index:
  - name: layoutlmv2-base-finetuned_docvqa
    results: []
---

# layoutlmv2-base-finetuned_docvqa

This model is a fine-tuned version of microsoft/layoutlmv2-base-uncased on an unknown dataset. It achieves the following results on the evaluation set:

- Loss: 4.6228

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 5.2897 | 0.22 | 50 | 4.5700 |
| 4.4152 | 0.44 | 100 | 4.4259 |
| 4.1658 | 0.66 | 150 | 3.7699 |
| 3.7752 | 0.88 | 200 | 3.5137 |
| 3.4951 | 1.11 | 250 | 3.2858 |
| 3.0566 | 1.33 | 300 | 3.1827 |
| 2.9219 | 1.55 | 350 | 3.0013 |
| 2.6689 | 1.77 | 400 | 2.8707 |
| 2.5179 | 1.99 | 450 | 2.8395 |
| 2.0212 | 2.21 | 500 | 2.5494 |
| 1.9111 | 2.43 | 550 | 2.4910 |
| 1.897 | 2.65 | 600 | 2.3390 |
| 1.7719 | 2.88 | 650 | 2.0315 |
| 1.3732 | 3.1 | 700 | 2.6837 |
| 1.3554 | 3.32 | 750 | 2.7709 |
| 1.2142 | 3.54 | 800 | 2.6627 |
| 1.1214 | 3.76 | 850 | 2.6243 |
| 1.1927 | 3.98 | 900 | 2.4302 |
| 0.8986 | 4.2 | 950 | 2.5084 |
| 0.8782 | 4.42 | 1000 | 2.6108 |
| 0.9384 | 4.65 | 1050 | 2.5729 |
| 0.9106 | 4.87 | 1100 | 2.8295 |
| 0.7983 | 5.09 | 1150 | 3.3324 |
| 0.7627 | 5.31 | 1200 | 3.0111 |
| 0.8101 | 5.53 | 1250 | 3.0340 |
| 0.8363 | 5.75 | 1300 | 2.6495 |
| 0.8682 | 5.97 | 1350 | 3.0019 |
| 0.636 | 6.19 | 1400 | 3.2153 |
| 0.5614 | 6.42 | 1450 | 2.9601 |
| 0.664 | 6.64 | 1500 | 3.1723 |
| 0.6052 | 6.86 | 1550 | 3.8548 |
| 0.5859 | 7.08 | 1600 | 3.2841 |
| 0.5383 | 7.3 | 1650 | 3.2616 |
| 0.3317 | 7.52 | 1700 | 3.6498 |
| 0.5176 | 7.74 | 1750 | 3.1792 |
| 0.323 | 7.96 | 1800 | 3.9586 |
| 0.2828 | 8.19 | 1850 | 3.3414 |
| 0.3408 | 8.41 | 1900 | 3.4490 |
| 0.4341 | 8.63 | 1950 | 3.6120 |
| 0.4256 | 8.85 | 2000 | 3.6485 |
| 0.2488 | 9.07 | 2050 | 3.2907 |
| 0.2399 | 9.29 | 2100 | 3.9223 |
| 0.3902 | 9.51 | 2150 | 3.4605 |
| 0.1764 | 9.73 | 2200 | 3.4834 |
| 0.3641 | 9.96 | 2250 | 3.6385 |
| 0.0802 | 10.18 | 2300 | 4.1041 |
| 0.1922 | 10.4 | 2350 | 4.0973 |
| 0.1943 | 10.62 | 2400 | 3.8264 |
| 0.1944 | 10.84 | 2450 | 4.0448 |
| 0.1396 | 11.06 | 2500 | 4.0736 |
| 0.1399 | 11.28 | 2550 | 4.1645 |
| 0.1739 | 11.5 | 2600 | 4.0905 |
| 0.0859 | 11.73 | 2650 | 4.2965 |
| 0.1819 | 11.95 | 2700 | 3.9382 |
| 0.1614 | 12.17 | 2750 | 4.2804 |
| 0.1406 | 12.39 | 2800 | 4.4033 |
| 0.1474 | 12.61 | 2850 | 4.3500 |
| 0.0857 | 12.83 | 2900 | 4.6170 |
| 0.1197 | 13.05 | 2950 | 4.0885 |
| 0.1087 | 13.27 | 3000 | 4.1931 |
| 0.0654 | 13.5 | 3050 | 4.4273 |
| 0.1081 | 13.72 | 3100 | 4.3433 |
| 0.2075 | 13.94 | 3150 | 4.1598 |
| 0.0807 | 14.16 | 3200 | 4.1951 |
| 0.0338 | 14.38 | 3250 | 4.2540 |
| 0.0918 | 14.6 | 3300 | 4.4138 |
| 0.1703 | 14.82 | 3350 | 3.9894 |
| 0.0176 | 15.04 | 3400 | 4.3193 |
| 0.1292 | 15.27 | 3450 | 4.4866 |
| 0.0484 | 15.49 | 3500 | 4.2460 |
| 0.0703 | 15.71 | 3550 | 4.2828 |
| 0.1076 | 15.93 | 3600 | 4.4895 |
| 0.0245 | 16.15 | 3650 | 4.5421 |
| 0.0779 | 16.37 | 3700 | 4.5335 |
| 0.0553 | 16.59 | 3750 | 4.5308 |
| 0.0626 | 16.81 | 3800 | 4.4731 |
| 0.0175 | 17.04 | 3850 | 4.4889 |
| 0.0038 | 17.26 | 3900 | 4.4956 |
| 0.0074 | 17.48 | 3950 | 4.6014 |
| 0.0761 | 17.7 | 4000 | 4.5396 |
| 0.0095 | 17.92 | 4050 | 4.5511 |
| 0.0634 | 18.14 | 4100 | 4.5970 |
| 0.0043 | 18.36 | 4150 | 4.6040 |
| 0.0863 | 18.58 | 4200 | 4.6277 |
| 0.02 | 18.81 | 4250 | 4.5889 |
| 0.0176 | 19.03 | 4300 | 4.6318 |
| 0.0062 | 19.25 | 4350 | 4.6496 |
| 0.008 | 19.47 | 4400 | 4.6139 |
| 0.0035 | 19.69 | 4450 | 4.6159 |
| 0.0137 | 19.91 | 4500 | 4.6228 |
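
The log also lets one back out an approximate training-set size — a back-of-envelope estimate only, since the dataset is not identified on this card, and it assumes every step consumed a full batch:

```python
# Estimate from the final logged row (step 4500 at epoch 19.91).
# Assumption: each step processed a full batch of train_batch_size = 4;
# the dataset itself is not stated on this card.
final_step = 4500
final_epoch = 19.91
train_batch_size = 4

steps_per_epoch = round(final_step / final_epoch)     # batches per epoch
approx_examples = steps_per_epoch * train_batch_size  # rough dataset size
```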

## Framework versions

- Transformers 4.32.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3