---
license: mit
base_model: microsoft/layoutlm-base-uncased
tags:
  - generated_from_trainer
model-index:
  - name: layoutlm-custom_no_text
    results: []
---

# layoutlm-custom_no_text

This model is a fine-tuned version of [microsoft/layoutlm-base-uncased](https://huggingface.co/microsoft/layoutlm-base-uncased) on an unspecified dataset. It achieves the following results on the evaluation set:

- Loss: 0.1118
- Noise: precision 0.8832, recall 0.8832, F1 0.8832 (support: 548)
- Signal: precision 0.8595, recall 0.8595, F1 0.8595 (support: 548)
- Overall precision: 0.8714
- Overall recall: 0.8714
- Overall F1: 0.8714
- Overall accuracy: 0.9773
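
The Noise/Signal labels indicate a token-classification head on top of LayoutLM. Below is a minimal inference sketch; the Hub repo id, the example words, and their bounding boxes are all assumptions for illustration. Note that LayoutLM expects box coordinates normalized to the 0-1000 range:

```python
import torch
from transformers import LayoutLMForTokenClassification, LayoutLMTokenizerFast

# Hypothetical repo id; substitute the actual Hub path or a local checkpoint.
MODEL_ID = "uttam333/layoutlm-custom_no_text"

tokenizer = LayoutLMTokenizerFast.from_pretrained(MODEL_ID)
model = LayoutLMForTokenClassification.from_pretrained(MODEL_ID)
model.eval()

# Invented example: words from an OCR pass plus their 0-1000-normalized boxes.
words = ["Invoice", "Total:", "$123.45"]
boxes = [[60, 50, 200, 80], [60, 100, 150, 130], [160, 100, 260, 130]]

encoding = tokenizer(words, is_split_into_words=True, return_tensors="pt")

# Expand word-level boxes to token level; special tokens get a dummy box.
word_ids = encoding.word_ids(0)
token_boxes = [[0, 0, 0, 0] if i is None else boxes[i] for i in word_ids]
encoding["bbox"] = torch.tensor([token_boxes])

with torch.no_grad():
    logits = model(**encoding).logits

pred_ids = logits.argmax(-1).squeeze(0).tolist()
print([model.config.id2label[p] for p in pred_ids])
```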

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9, 0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
- mixed_precision_training: Native AMP
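
These settings map one-to-one onto `transformers.TrainingArguments`; a sketch of that mapping is below. The `output_dir` is a placeholder, and `fp16=True` is the usual way to request Native AMP through the `Trainer`:

```python
from transformers import TrainingArguments

# Mirrors the hyperparameter list above; output_dir is a placeholder.
training_args = TrainingArguments(
    output_dir="layoutlm-custom_no_text",
    learning_rate=3e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=15,
    fp16=True,  # Native AMP mixed-precision training
)
```

A `Trainer` built with these arguments, the model, and the (unspecified) datasets reproduces the schedule in the results table: 18 optimizer steps per epoch over 15 epochs gives the 270 total steps shown there.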

### Training results

| Training Loss | Epoch | Step | Validation Loss | Noise Precision | Noise Recall | Noise F1 | Signal Precision | Signal Recall | Signal F1 | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
|--------------:|------:|-----:|----------------:|----------------:|-------------:|---------:|-----------------:|--------------:|----------:|------------------:|---------------:|-----------:|-----------------:|
| 0.4739 | 1.0 | 18 | 0.1915 | 0.6647 | 0.6296 | 0.6467 | 0.6782 | 0.6423 | 0.6598 | 0.6715 | 0.6359 | 0.6532 | 0.9293 |
| 0.1880 | 2.0 | 36 | 0.1127 | 0.8265 | 0.7737 | 0.7992 | 0.7953 | 0.7445 | 0.7691 | 0.8109 | 0.7591 | 0.7842 | 0.9579 |
| 0.1052 | 3.0 | 54 | 0.0889 | 0.8456 | 0.8193 | 0.8323 | 0.8249 | 0.7993 | 0.8119 | 0.8352 | 0.8093 | 0.8221 | 0.9674 |
| 0.0645 | 4.0 | 72 | 0.0766 | 0.8776 | 0.8631 | 0.8703 | 0.8553 | 0.8412 | 0.8482 | 0.8664 | 0.8522 | 0.8592 | 0.9750 |
| 0.0427 | 5.0 | 90 | 0.0914 | 0.8587 | 0.8650 | 0.8618 | 0.8351 | 0.8412 | 0.8382 | 0.8469 | 0.8531 | 0.8500 | 0.9730 |
| 0.0283 | 6.0 | 108 | 0.0987 | 0.8757 | 0.8741 | 0.8749 | 0.8556 | 0.8540 | 0.8548 | 0.8656 | 0.8641 | 0.8648 | 0.9761 |
| 0.0205 | 7.0 | 126 | 0.0988 | 0.8646 | 0.8741 | 0.8693 | 0.8375 | 0.8467 | 0.8421 | 0.8511 | 0.8604 | 0.8557 | 0.9742 |
| 0.0141 | 8.0 | 144 | 0.1086 | 0.8707 | 0.8723 | 0.8715 | 0.8543 | 0.8558 | 0.8551 | 0.8625 | 0.8641 | 0.8633 | 0.9753 |
| 0.0120 | 9.0 | 162 | 0.1076 | 0.8812 | 0.8796 | 0.8804 | 0.8592 | 0.8577 | 0.8584 | 0.8702 | 0.8686 | 0.8694 | 0.9773 |
| 0.0104 | 10.0 | 180 | 0.1089 | 0.8789 | 0.8741 | 0.8765 | 0.8569 | 0.8522 | 0.8545 | 0.8679 | 0.8631 | 0.8655 | 0.9764 |
| 0.0101 | 11.0 | 198 | 0.1111 | 0.8814 | 0.8814 | 0.8814 | 0.8595 | 0.8595 | 0.8595 | 0.8704 | 0.8704 | 0.8704 | 0.9761 |
| 0.0080 | 12.0 | 216 | 0.1049 | 0.8867 | 0.8850 | 0.8858 | 0.8665 | 0.8650 | 0.8658 | 0.8766 | 0.8750 | 0.8758 | 0.9778 |
| 0.0072 | 13.0 | 234 | 0.1094 | 0.8775 | 0.8759 | 0.8767 | 0.8519 | 0.8504 | 0.8511 | 0.8647 | 0.8631 | 0.8639 | 0.9759 |
| 0.0070 | 14.0 | 252 | 0.1117 | 0.8777 | 0.8777 | 0.8777 | 0.8540 | 0.8540 | 0.8540 | 0.8659 | 0.8659 | 0.8659 | 0.9764 |
| 0.0084 | 15.0 | 270 | 0.1118 | 0.8832 | 0.8832 | 0.8832 | 0.8595 | 0.8595 | 0.8595 | 0.8714 | 0.8714 | 0.8714 | 0.9773 |

Per-class metrics (Noise, Signal) are entity-level scores over 548 entities per class at every evaluation step; values are rounded to four decimal places.
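
The per-class dictionaries logged during training (`{'precision': …, 'recall': …, 'f1': …, 'number': …}`) match the output format of the seqeval metric as wrapped by the 🤗 `evaluate` library, which scores whole entities rather than individual tokens. A minimal sketch, with invented IOB2 label sequences since the card does not state the tagging scheme:

```python
import evaluate

seqeval = evaluate.load("seqeval")

# Invented gold and predicted label sequences, one list per example.
references = [["B-SIGNAL", "I-SIGNAL", "O", "B-NOISE"]]
predictions = [["B-SIGNAL", "I-SIGNAL", "O", "B-NOISE"]]

results = seqeval.compute(predictions=predictions, references=references)
print(results["SIGNAL"])      # {'precision': ..., 'recall': ..., 'f1': ..., 'number': ...}
print(results["overall_f1"])  # micro-averaged across entity types
```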

### Framework versions

- Transformers 4.36.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0