---
license: apache-2.0
base_model: allenai/longformer-base-4096
tags:
  - generated_from_trainer
datasets:
  - essays_su_g
metrics:
  - accuracy
model-index:
  - name: longformer-simple
    results:
      - task:
          name: Token Classification
          type: token-classification
        dataset:
          name: essays_su_g
          type: essays_su_g
          config: simple
          split: test
          args: simple
        metrics:
          - name: Accuracy
            type: accuracy
            value: 0.8374001218245011
---

# longformer-simple

This model is a fine-tuned version of allenai/longformer-base-4096 on the essays_su_g dataset. It achieves the following results on the evaluation set:

- Loss: 0.4624
- Accuracy: 0.8374

| Class        | Precision | Recall | F1-score | Support |
|:-------------|----------:|-------:|---------:|--------:|
| Claim        | 0.5906    | 0.6178 | 0.6039   | 4252    |
| Majorclaim   | 0.7632    | 0.7961 | 0.7793   | 2182    |
| O            | 0.9296    | 0.8975 | 0.9133   | 9275    |
| Premise      | 0.8734    | 0.8757 | 0.8745   | 12200   |
| Macro avg    | 0.7892    | 0.7968 | 0.7927   | 27909   |
| Weighted avg | 0.8404    | 0.8374 | 0.8387   | 27909   |
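The aggregate metrics follow directly from the per-class numbers: the macro average is the unweighted mean over the four classes, the weighted average is the support-weighted mean, and (with exactly one label per token) overall accuracy equals support-weighted recall. A small standard-library sanity check, with all values copied from this card:

```python
# Sanity-check the aggregate metrics against the per-class values
# reported above (all numbers copied from this model card).
per_class = {
    "Claim":      {"recall": 0.617826904985889,  "f1": 0.6039080459770114, "support": 4252},
    "Majorclaim": {"recall": 0.7960586617781852, "f1": 0.7792732166890982, "support": 2182},
    "O":          {"recall": 0.897466307277628,  "f1": 0.913270064183444,  "support": 9275},
    "Premise":    {"recall": 0.875655737704918,  "f1": 0.8745446359133887, "support": 12200},
}

total = sum(c["support"] for c in per_class.values())

# Macro average: unweighted mean over the four classes.
macro_f1 = sum(c["f1"] for c in per_class.values()) / len(per_class)

# Weighted average: mean weighted by each class's token support.
weighted_f1 = sum(c["f1"] * c["support"] for c in per_class.values()) / total

# With one label per token, overall accuracy equals support-weighted recall.
weighted_recall = sum(c["recall"] * c["support"] for c in per_class.values()) / total

print(total)                      # 27909
print(round(macro_f1, 4))         # 0.7927
print(round(weighted_f1, 4))      # 0.8387
print(round(weighted_recall, 4))  # 0.8374
```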

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 7
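A minimal sketch of how these hyperparameters might be expressed with `transformers.TrainingArguments`; the output directory is a placeholder, not taken from this card, and the trainer/dataset wiring is omitted:

```python
# Hypothetical configuration sketch reproducing the hyperparameters above.
# "longformer-simple" as output_dir is a placeholder assumption.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="longformer-simple",  # placeholder path
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=7,
)
```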

### Training results

| Training Loss | Epoch | Step | Validation Loss | Claim | Majorclaim | O | Premise | Accuracy | Macro avg | Weighted avg |
|:-------------:|:-----:|:----:|:---------------:|:-----:|:----------:|:-:|:-------:|:--------:|:---------:|:------------:|
| No log | 1.0 | 41 | 0.5869 | {'precision': 0.50130958617077, 'recall': 0.2250705550329257, 'f1-score': 0.310663853270573, 'support': 4252.0} | {'precision': 0.6411985018726591, 'recall': 0.3923006416131989, 'f1-score': 0.48677850440716514, 'support': 2182.0} | {'precision': 0.8172767203513909, 'recall': 0.9027493261455526, 'f1-score': 0.8578893442622951, 'support': 9275.0} | {'precision': 0.7879334257975035, 'recall': 0.931311475409836, 'f1-score': 0.8536438767843726, 'support': 12200.0} | 0.7721 | {'precision': 0.6869295585480809, 'recall': 0.6128579995503783, 'f1-score': 0.6272438946811014, 'support': 27909.0} | {'precision': 0.7425451598936884, 'recall': 0.7720806908165825, 'f1-score': 0.7436480119504476, 'support': 27909.0} |
| No log | 2.0 | 82 | 0.4605 | {'precision': 0.5800683670786222, 'recall': 0.5188146754468486, 'f1-score': 0.5477343265052762, 'support': 4252.0} | {'precision': 0.679080824088748, 'recall': 0.7855178735105408, 'f1-score': 0.7284317892052699, 'support': 2182.0} | {'precision': 0.9250369696280286, 'recall': 0.8767654986522911, 'f1-score': 0.900254621941769, 'support': 9275.0} | {'precision': 0.8497380970995231, 'recall': 0.8909016393442623, 'f1-score': 0.8698331399303749, 'support': 12200.0} | 0.8213 | {'precision': 0.7584810644737304, 'recall': 0.7679999217384857, 'f1-score': 0.7615634693956725, 'support': 27909.0} | {'precision': 0.8203349361458346, 'recall': 0.8212762908022502, 'f1-score': 0.8198154876923865, 'support': 27909.0} |
| No log | 3.0 | 123 | 0.4587 | {'precision': 0.6081277213352685, 'recall': 0.39416745061147695, 'f1-score': 0.478310502283105, 'support': 4252.0} | {'precision': 0.7005473025801408, 'recall': 0.8212648945921174, 'f1-score': 0.7561181434599156, 'support': 2182.0} | {'precision': 0.9445551517993201, 'recall': 0.8687870619946092, 'f1-score': 0.905088172526115, 'support': 9275.0} | {'precision': 0.8125, 'recall': 0.9366393442622951, 'f1-score': 0.8701644837039293, 'support': 12200.0} | 0.8224 | {'precision': 0.7664325439286823, 'recall': 0.7552146878651247, 'f1-score': 0.7524203254932662, 'support': 27909.0} | {'precision': 0.8164965537384401, 'recall': 0.8224228743416102, 'f1-score': 0.8131543783763286, 'support': 27909.0} |
| No log | 4.0 | 164 | 0.4491 | {'precision': 0.5829145728643216, 'recall': 0.6274694261523989, 'f1-score': 0.6043719560539133, 'support': 4252.0} | {'precision': 0.7112758486149044, 'recall': 0.8354720439963337, 'f1-score': 0.7683877766069548, 'support': 2182.0} | {'precision': 0.9357652656621729, 'recall': 0.8905660377358491, 'f1-score': 0.9126063418406806, 'support': 9275.0} | {'precision': 0.881426896667225, 'recall': 0.8627868852459016, 'f1-score': 0.8720072901996521, 'support': 12200.0} | 0.8340 | {'precision': 0.7778456459521561, 'recall': 0.8040735982826209, 'f1-score': 0.7893433411753001, 'support': 27909.0} | {'precision': 0.8407032729174679, 'recall': 0.8340320326776308, 'f1-score': 0.8366234708053201, 'support': 27909.0} |
| No log | 5.0 | 205 | 0.4611 | {'precision': 0.5805860805860806, 'recall': 0.5964252116650988, 'f1-score': 0.588399071925754, 'support': 4252.0} | {'precision': 0.7489102005231038, 'recall': 0.7873510540788268, 'f1-score': 0.7676496872207329, 'support': 2182.0} | {'precision': 0.9323308270676691, 'recall': 0.8957412398921832, 'f1-score': 0.9136698559331353, 'support': 9275.0} | {'precision': 0.8673800259403373, 'recall': 0.8770491803278688, 'f1-score': 0.8721878056732963, 'support': 12200.0} | 0.8335 | {'precision': 0.7823017835292977, 'recall': 0.7891416714909945, 'f1-score': 0.7854766051882296, 'support': 27909.0} | {'precision': 0.8360091300196414, 'recall': 0.8334945716435559, 'f1-score': 0.8345646069131102, 'support': 27909.0} |
| No log | 6.0 | 246 | 0.4642 | {'precision': 0.5962333486449242, 'recall': 0.6105362182502352, 'f1-score': 0.6033000232396003, 'support': 4252.0} | {'precision': 0.7385488447507094, 'recall': 0.8350137488542622, 'f1-score': 0.783824478382448, 'support': 2182.0} | {'precision': 0.9409678526484384, 'recall': 0.8867924528301887, 'f1-score': 0.9130772646536413, 'support': 9275.0} | {'precision': 0.8715477443913501, 'recall': 0.8820491803278688, 'f1-score': 0.8767670183729173, 'support': 12200.0} | 0.8386 | {'precision': 0.7868244476088555, 'recall': 0.8035979000656387, 'f1-score': 0.7942421961621517, 'support': 27909.0} | {'precision': 0.8422751475356695, 'recall': 0.8385825360994661, 'f1-score': 0.8399041873394746, 'support': 27909.0} |
| No log | 7.0 | 287 | 0.4624 | {'precision': 0.5906025179856115, 'recall': 0.617826904985889, 'f1-score': 0.6039080459770114, 'support': 4252.0} | {'precision': 0.7631810193321616, 'recall': 0.7960586617781852, 'f1-score': 0.7792732166890982, 'support': 2182.0} | {'precision': 0.9296403841858387, 'recall': 0.897466307277628, 'f1-score': 0.913270064183444, 'support': 9275.0} | {'precision': 0.8734363502575423, 'recall': 0.875655737704918, 'f1-score': 0.8745446359133887, 'support': 12200.0} | 0.8374 | {'precision': 0.7892150679402886, 'recall': 0.7967519029366551, 'f1-score': 0.7927489906907357, 'support': 27909.0} | {'precision': 0.8404042039171331, 'recall': 0.8374001218245011, 'f1-score': 0.8387335832080923, 'support': 27909.0} |
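Note that the reported final checkpoint (epoch 7) is not the best epoch by either criterion logged above: validation loss bottoms out at epoch 4 and accuracy peaks at epoch 6. A short standard-library check, with the per-epoch values copied from the table:

```python
# Per-epoch (validation_loss, accuracy) pairs copied from the table above.
history = {
    1: (0.5869, 0.7721),
    2: (0.4605, 0.8213),
    3: (0.4587, 0.8224),
    4: (0.4491, 0.8340),
    5: (0.4611, 0.8335),
    6: (0.4642, 0.8386),
    7: (0.4624, 0.8374),
}

best_loss_epoch = min(history, key=lambda e: history[e][0])
best_acc_epoch = max(history, key=lambda e: history[e][1])

print(best_loss_epoch)  # 4
print(best_acc_epoch)   # 6
```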

### Framework versions

- Transformers 4.37.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.0
- Tokenizers 0.15.2