# Decoder Information Extraction
This model is a fine-tuned version of [LeoLM/leo-mistral-hessianai-7b](https://huggingface.co/LeoLM/leo-mistral-hessianai-7b) on an unknown dataset. It achieves the following results on the evaluation set:

- Loss: 0.2802

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
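The checkpoint can be loaded with the standard `transformers` auto classes. A minimal sketch, assuming the fine-tune is published on the Hub; the fine-tuned repo id below is a placeholder, only the base model id comes from this card:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Base model named in this card; the fine-tuned repo id is hypothetical.
BASE_MODEL = "LeoLM/leo-mistral-hessianai-7b"

def load_finetuned(repo_id: str):
    """Load the tokenizer and causal-LM weights for a given Hub repo id."""
    tokenizer = AutoTokenizer.from_pretrained(repo_id)
    model = AutoModelForCausalLM.from_pretrained(repo_id)
    return tokenizer, model

# tokenizer, model = load_finetuned("<your-namespace>/decoder-information-extraction")
```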
## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

### Training results

Training Loss | Epoch | Step | Validation Loss | Input Tokens Seen
---|---|---|---|---
0.7643 | 0.25 | 1377 | 0.7811 | 2763564 |
0.6186 | 0.5 | 2754 | 0.6358 | 5558436 |
0.5504 | 0.75 | 4131 | 0.5620 | 8313712 |
0.5311 | 1.0 | 5508 | 0.5014 | 11183370 |
0.3525 | 1.25 | 6885 | 0.4752 | 13968826 |
0.4545 | 1.5 | 8262 | 0.4221 | 16892566 |
0.4189 | 1.75 | 9639 | 0.3897 | 19596674 |
0.297 | 2.0 | 11016 | 0.3561 | 22472546 |
0.2219 | 2.25 | 12393 | 0.3447 | 25171614 |
0.2962 | 2.5 | 13770 | 0.3171 | 28070134 |
0.2244 | 2.75 | 15147 | 0.2974 | 30841414 |
0.307 | 3.0 | 16524 | 0.2774 | 33754586 |
0.1942 | 3.25 | 17901 | 0.2824 | 36458478 |
0.2212 | 3.5 | 19278 | 0.2762 | 39268846 |
0.1737 | 3.75 | 20655 | 0.2713 | 42149206 |
0.2141 | 4.0 | 22032 | 0.2692 | 44987298 |
0.2088 | 4.25 | 23409 | 0.2795 | 47803982 |
0.0971 | 4.5 | 24786 | 0.2801 | 50611334 |
0.1567 | 4.75 | 26163 | 0.2802 | 53414782 |
0.1923 | 5.0 | 27540 | 0.2802 | 56173093 |
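Validation loss reaches its minimum at epoch 4.0 (step 22032, loss 0.2692) and drifts slightly upward through epoch 5.0, which suggests the epoch-4 checkpoint is the strongest candidate. A quick sketch to recover that from the logged values in the table above:

```python
# Validation loss per step, transcribed from the training-results table.
val_loss = {
    1377: 0.7811,  2754: 0.6358,  4131: 0.5620,  5508: 0.5014,
    6885: 0.4752,  8262: 0.4221,  9639: 0.3897, 11016: 0.3561,
    12393: 0.3447, 13770: 0.3171, 15147: 0.2974, 16524: 0.2774,
    17901: 0.2824, 19278: 0.2762, 20655: 0.2713, 22032: 0.2692,
    23409: 0.2795, 24786: 0.2801, 26163: 0.2802, 27540: 0.2802,
}

# Pick the step with the lowest validation loss.
best_step = min(val_loss, key=val_loss.get)
print(best_step, val_loss[best_step])  # -> 22032 0.2692
```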