Theoreticallyhugo committed:
trainer: training complete at 2024-02-19 18:47:19.333329.

Files changed:
- README.md (+16 -16)
- model.safetensors (+1 -1)
README.md CHANGED

@@ -22,7 +22,7 @@ model-index:
     metrics:
     - name: Accuracy
       type: accuracy
-      value: 0.
+      value: 0.8340320326776308
 ---

 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -32,14 +32,14 @@ should probably proofread and complete it, then remove this comment. -->

 This model is a fine-tuned version of [allenai/longformer-base-4096](https://huggingface.co/allenai/longformer-base-4096) on the essays_su_g dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.
-- Claim: {'precision': 0.
-- Majorclaim: {'precision': 0.
-- O: {'precision': 0.
-- Premise: {'precision': 0.
-- Accuracy: 0.
-- Macro avg: {'precision': 0.
-- Weighted avg: {'precision': 0.
+- Loss: 0.4397
+- Claim: {'precision': 0.5897372943776087, 'recall': 0.5649106302916275, 'f1-score': 0.5770570570570571, 'support': 4252.0}
+- Majorclaim: {'precision': 0.7365996649916248, 'recall': 0.806141154903758, 'f1-score': 0.7698030634573303, 'support': 2182.0}
+- O: {'precision': 0.9290423511006817, 'recall': 0.8963881401617251, 'f1-score': 0.9124231782265146, 'support': 9275.0}
+- Premise: {'precision': 0.8642291383310665, 'recall': 0.8854098360655738, 'f1-score': 0.8746912830478967, 'support': 12200.0}
+- Accuracy: 0.8340
+- Macro avg: {'precision': 0.7799021122002454, 'recall': 0.7882124403556711, 'f1-score': 0.7834936454471997, 'support': 27909.0}
+- Weighted avg: {'precision': 0.8339706452686643, 'recall': 0.8340320326776308, 'f1-score': 0.8336850307178961, 'support': 27909.0}

 ## Model description

@@ -68,13 +68,13 @@ The following hyperparameters were used during training:

 ### Training results

-| Training Loss | Epoch | Step | Validation Loss | Claim | Majorclaim
-|
-| No log | 1.0 | 41 | 0.
-| No log | 2.0 | 82 | 0.
-| No log | 3.0 | 123 | 0.
-| No log | 4.0 | 164 | 0.
-| No log | 5.0 | 205 | 0.
+| Training Loss | Epoch | Step | Validation Loss | Claim | Majorclaim | O | Premise | Accuracy | Macro avg | Weighted avg |
+|:-------------:|:-----:|:----:|:---------------:|:-----:|:----------:|:-:|:-------:|:--------:|:---------:|:------------:|
+| No log | 1.0 | 41 | 0.5888 | {'precision': 0.49844559585492226, 'recall': 0.2262464722483537, 'f1-score': 0.311226140407635, 'support': 4252.0} | {'precision': 0.6139372822299651, 'recall': 0.40375802016498624, 'f1-score': 0.4871440420237766, 'support': 2182.0} | {'precision': 0.8171685569026202, 'recall': 0.9011320754716982, 'f1-score': 0.8570989078603293, 'support': 9275.0} | {'precision': 0.7903744062587315, 'recall': 0.9274590163934426, 'f1-score': 0.8534469754110725, 'support': 12200.0} | 0.7709 | {'precision': 0.6799814603115598, 'recall': 0.6146488960696201, 'f1-score': 0.6272290164257033, 'support': 27909.0} | {'precision': 0.7410085615761669, 'recall': 0.7709341072772224, 'f1-score': 0.7434134981235008, 'support': 27909.0} |
+| No log | 2.0 | 82 | 0.4676 | {'precision': 0.574496644295302, 'recall': 0.5032925682031985, 'f1-score': 0.5365425598595963, 'support': 4252.0} | {'precision': 0.6832784184514004, 'recall': 0.7603116406966086, 'f1-score': 0.7197396963123645, 'support': 2182.0} | {'precision': 0.9165271733065506, 'recall': 0.8854986522911051, 'f1-score': 0.9007457775828033, 'support': 9275.0} | {'precision': 0.8488472059398202, 'recall': 0.8902459016393443, 'f1-score': 0.8690538107621524, 'support': 12200.0} | 0.8196 | {'precision': 0.7557873604982683, 'recall': 0.7598371907075642, 'f1-score': 0.7565204611292291, 'support': 27909.0} | {'precision': 0.816596749632328, 'recall': 0.8195564154932101, 'f1-score': 0.8172533792058241, 'support': 27909.0} |
+| No log | 3.0 | 123 | 0.4384 | {'precision': 0.6117381489841986, 'recall': 0.44614299153339604, 'f1-score': 0.5159798721610226, 'support': 4252.0} | {'precision': 0.7290375877736472, 'recall': 0.8088909257561869, 'f1-score': 0.7668911579404737, 'support': 2182.0} | {'precision': 0.9303112313937754, 'recall': 0.889487870619946, 'f1-score': 0.9094416579397012, 'support': 9275.0} | {'precision': 0.8289074635697906, 'recall': 0.9185245901639344, 'f1-score': 0.8714180178078463, 'support': 12200.0} | 0.8283 | {'precision': 0.774998607930353, 'recall': 0.7657615945183658, 'f1-score': 0.7659326764622609, 'support': 27909.0} | {'precision': 0.8217126501390813, 'recall': 0.8283349457164355, 'f1-score': 0.8217304137626299, 'support': 27909.0} |
+| No log | 4.0 | 164 | 0.4487 | {'precision': 0.5776205218929678, 'recall': 0.6142991533396049, 'f1-score': 0.5953954866651471, 'support': 4252.0} | {'precision': 0.7034400948991696, 'recall': 0.8153070577451879, 'f1-score': 0.7552536616429633, 'support': 2182.0} | {'precision': 0.9331742243436754, 'recall': 0.8852830188679245, 'f1-score': 0.9085979860573199, 'support': 9275.0} | {'precision': 0.8791773778920309, 'recall': 0.8690163934426229, 'f1-score': 0.8740673564450308, 'support': 12200.0} | 0.8314 | {'precision': 0.7733530547569609, 'recall': 0.795976405848835, 'f1-score': 0.7833286227026153, 'support': 27909.0} | {'precision': 0.837439667749803, 'recall': 0.8314163889784657, 'f1-score': 0.8337974548825171, 'support': 27909.0} |
+| No log | 5.0 | 205 | 0.4397 | {'precision': 0.5897372943776087, 'recall': 0.5649106302916275, 'f1-score': 0.5770570570570571, 'support': 4252.0} | {'precision': 0.7365996649916248, 'recall': 0.806141154903758, 'f1-score': 0.7698030634573303, 'support': 2182.0} | {'precision': 0.9290423511006817, 'recall': 0.8963881401617251, 'f1-score': 0.9124231782265146, 'support': 9275.0} | {'precision': 0.8642291383310665, 'recall': 0.8854098360655738, 'f1-score': 0.8746912830478967, 'support': 12200.0} | 0.8340 | {'precision': 0.7799021122002454, 'recall': 0.7882124403556711, 'f1-score': 0.7834936454471997, 'support': 27909.0} | {'precision': 0.8339706452686643, 'recall': 0.8340320326776308, 'f1-score': 0.8336850307178961, 'support': 27909.0} |


 ### Framework versions
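The per-label metrics in the card (Claim, Majorclaim, O, Premise) suggest this checkpoint is a token-classification fine-tune of `allenai/longformer-base-4096` for argument-mining tags, though the diff does not state the architecture or the repository id explicitly. The sketch below is therefore only a minimal, hedged illustration of how such a checkpoint would typically be loaded with `transformers`; the checkpoint path and the example sentence are placeholders, not values taken from this commit.

```python
# Minimal usage sketch. Assumptions: token-classification head over Longformer,
# and "path/to/checkpoint" stands in for the (unspecified) repo id or local dir.
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

checkpoint = "path/to/checkpoint"  # placeholder, not the real repository id
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForTokenClassification.from_pretrained(checkpoint)

text = "School uniforms should be mandatory because they reduce distractions."
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Pick the highest-scoring label per token (e.g. Claim, MajorClaim, O, Premise).
predictions = logits.argmax(dim=-1).squeeze(0)
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"].squeeze(0))
for token, pred in zip(tokens, predictions):
    print(token, model.config.id2label[pred.item()])
```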
model.safetensors CHANGED

@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:6b40087e659871f06bd6a05a9a59d051f42431deb5ab9aa6b9aa44a5b35facf4
 size 592324828
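The model.safetensors entry is a Git LFS pointer: the repository tracks only the object id (a SHA-256 of the blob) and its byte size, while the weights themselves live in LFS storage. As a small sketch, a locally fetched copy of the file (e.g. after `git lfs pull`) can be checked against the pointer values shown in this diff:

```python
# Verify a downloaded model.safetensors against the LFS pointer above.
# Assumption: the file has already been fetched locally.
import hashlib
import os

EXPECTED_OID = "6b40087e659871f06bd6a05a9a59d051f42431deb5ab9aa6b9aa44a5b35facf4"
EXPECTED_SIZE = 592324828
path = "model.safetensors"

# Size check against the pointer's `size` field.
assert os.path.getsize(path) == EXPECTED_SIZE, "size mismatch"

# Hash in 1 MiB chunks so the ~592 MB file never sits fully in memory.
sha256 = hashlib.sha256()
with open(path, "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):
        sha256.update(chunk)

assert sha256.hexdigest() == EXPECTED_OID, "oid mismatch"
print("model.safetensors matches the LFS pointer")
```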