---
license: apache-2.0
base_model: allenai/longformer-base-4096
tags:
- generated_from_trainer
datasets:
- essays_su_g
metrics:
- accuracy
model-index:
- name: longformer-simple
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: essays_su_g
type: essays_su_g
config: simple
split: test
args: simple
metrics:
- name: Accuracy
type: accuracy
value: 0.8340320326776308
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# longformer-simple
This model is a fine-tuned version of [allenai/longformer-base-4096](https://huggingface.co/allenai/longformer-base-4096) on the essays_su_g dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4397
- Accuracy: 0.8340

| Label        | Precision | Recall | F1-score | Support |
|:-------------|----------:|-------:|---------:|--------:|
| Claim        | 0.5897    | 0.5649 | 0.5771   | 4252    |
| Majorclaim   | 0.7366    | 0.8061 | 0.7698   | 2182    |
| O            | 0.9290    | 0.8964 | 0.9124   | 9275    |
| Premise      | 0.8642    | 0.8854 | 0.8747   | 12200   |
| Macro avg    | 0.7799    | 0.7882 | 0.7835   | 27909   |
| Weighted avg | 0.8340    | 0.8340 | 0.8337   | 27909   |
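As a sanity check, the macro and weighted averages follow directly from the per-class scores reported above (values rounded to four decimals, so the results match the reported figures to about three decimal places):

```python
# Recompute macro and weighted F1 from the per-class evaluation scores.
per_class = {
    "Claim":      {"f1": 0.5771, "support": 4252},
    "Majorclaim": {"f1": 0.7698, "support": 2182},
    "O":          {"f1": 0.9124, "support": 9275},
    "Premise":    {"f1": 0.8747, "support": 12200},
}

total = sum(c["support"] for c in per_class.values())  # 27909 tokens

# Macro average: unweighted mean over the four labels.
macro_f1 = sum(c["f1"] for c in per_class.values()) / len(per_class)

# Weighted average: mean weighted by each label's token support.
weighted_f1 = sum(c["f1"] * c["support"] for c in per_class.values()) / total

print(f"macro f1:    {macro_f1:.4f}")     # reported: 0.7835
print(f"weighted f1: {weighted_f1:.4f}")  # reported: 0.8337
```

The gap between the two averages reflects the class imbalance: Premise and O dominate the token counts, and both score well above the minority Claim class.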
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
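The hyperparameters above map onto a `TrainingArguments` configuration roughly like the following. This is a sketch, not the actual training script (which is not included here); the output directory name is a placeholder, and the Adam settings shown are the library defaults, which match the values listed:

```python
from transformers import TrainingArguments

# Illustrative reconstruction of the reported hyperparameters.
training_args = TrainingArguments(
    output_dir="longformer-simple",   # placeholder, not the author's path
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=5,
    # Adam defaults, matching "betas=(0.9,0.999) and epsilon=1e-08" above.
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```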
### Training results
Training loss was not logged during these runs ("No log" in the trainer output). Class supports are constant across epochs: Claim 4252, Majorclaim 2182, O 9275, Premise 12200 (27909 tokens total). Per-class and average cells give precision / recall / F1.

| Epoch | Step | Validation Loss | Claim | Majorclaim | O | Premise | Accuracy | Macro avg | Weighted avg |
|:-----:|:----:|:---------------:|:------------------------:|:------------------------:|:------------------------:|:------------------------:|:--------:|:------------------------:|:------------------------:|
| 1.0 | 41 | 0.5888 | 0.4984 / 0.2262 / 0.3112 | 0.6139 / 0.4038 / 0.4871 | 0.8172 / 0.9011 / 0.8571 | 0.7904 / 0.9275 / 0.8534 | 0.7709 | 0.6800 / 0.6146 / 0.6272 | 0.7410 / 0.7709 / 0.7434 |
| 2.0 | 82 | 0.4676 | 0.5745 / 0.5033 / 0.5365 | 0.6833 / 0.7603 / 0.7197 | 0.9165 / 0.8855 / 0.9007 | 0.8488 / 0.8902 / 0.8691 | 0.8196 | 0.7558 / 0.7598 / 0.7565 | 0.8166 / 0.8196 / 0.8173 |
| 3.0 | 123 | 0.4384 | 0.6117 / 0.4461 / 0.5160 | 0.7290 / 0.8089 / 0.7669 | 0.9303 / 0.8895 / 0.9094 | 0.8289 / 0.9185 / 0.8714 | 0.8283 | 0.7750 / 0.7658 / 0.7659 | 0.8217 / 0.8283 / 0.8217 |
| 4.0 | 164 | 0.4487 | 0.5776 / 0.6143 / 0.5954 | 0.7034 / 0.8153 / 0.7553 | 0.9332 / 0.8853 / 0.9086 | 0.8792 / 0.8690 / 0.8741 | 0.8314 | 0.7734 / 0.7960 / 0.7833 | 0.8374 / 0.8314 / 0.8338 |
| 5.0 | 205 | 0.4397 | 0.5897 / 0.5649 / 0.5771 | 0.7366 / 0.8061 / 0.7698 | 0.9290 / 0.8964 / 0.9124 | 0.8642 / 0.8854 / 0.8747 | 0.8340 | 0.7799 / 0.7882 / 0.7835 | 0.8340 / 0.8340 / 0.8337 |
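One detail worth noting from the per-epoch results: validation loss bottoms out at epoch 3 (0.4384) and drifts up afterwards, while accuracy keeps improving through epoch 5, which may indicate mild overfitting on the loss but not on the accuracy metric. A quick check of the logged values:

```python
# (epoch, validation loss, accuracy) as logged in the table above.
history = [
    (1, 0.5888, 0.7709),
    (2, 0.4676, 0.8196),
    (3, 0.4384, 0.8283),
    (4, 0.4487, 0.8314),
    (5, 0.4397, 0.8340),
]

best_loss_epoch = min(history, key=lambda row: row[1])[0]
best_acc_epoch = max(history, key=lambda row: row[2])[0]

print(best_loss_epoch)  # 3 -> lowest validation loss, mid-training
print(best_acc_epoch)   # 5 -> accuracy still improving at the final epoch
```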
### Framework versions
- Transformers 4.37.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.0
- Tokenizers 0.15.2