# Llama-3.1-8B_auto
This model is a fine-tuned version of meta-llama/Meta-Llama-3.1-8B-Instruct on the GaetanMichelet/chat-60_ft_task-1_auto dataset. Its validation loss on the evaluation set is tracked in the training results table below; the lowest value reached is 0.8270.
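Since this is a chat fine-tune of an Instruct model, inference presumably goes through the standard transformers chat workflow. The repo id below is a guess based on the collection name, and whether the checkpoint ships as full weights or as a PEFT adapter is not stated in this card; here is a minimal sketch assuming full weights:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "GaetanMichelet/Llama-3.1-8B_auto"  # hypothetical repo id, not confirmed by the card

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.bfloat16,  # assumption: bf16 is typical for Llama-3.1 fine-tunes
    device_map="auto",
)

# Llama-3.1-Instruct derivatives expect the chat template.
messages = [{"role": "user", "content": "Hello!"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```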
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
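The hyperparameter list itself is not reproduced in this excerpt. As a purely hypothetical sketch of how a run like this is typically wired up with transformers.TrainingArguments: every value below is a placeholder rather than the card's actual setting; only the checkpoint-selection behaviour is suggested by the results table (training halts near epoch 22 while the best loss sits at epoch 15).

```python
from transformers import TrainingArguments

# All values are placeholders for illustration; the card's real hyperparameters
# are not shown in this excerpt.
training_args = TrainingArguments(
    output_dir="llama-3.1-8b_auto",     # hypothetical output path
    learning_rate=1e-4,                 # placeholder
    per_device_train_batch_size=1,      # placeholder
    gradient_accumulation_steps=16,     # placeholder
    num_train_epochs=50,                # placeholder; the log stops near epoch 22
    eval_strategy="epoch",
    save_strategy="epoch",
    load_best_model_at_end=True,        # consistent with keeping the epoch-15 checkpoint
    metric_for_best_model="eval_loss",
)
```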
### Training results

| Training Loss | Epoch   | Step | Validation Loss |
|:-------------:|:-------:|:----:|:---------------:|
| 2.2096        | 0.6957  | 2    | 2.1129          |
| 2.167         | 1.7391  | 5    | 1.9558          |
| 1.8726        | 2.7826  | 8    | 1.7428          |
| 1.7678        | 3.8261  | 11   | 1.5017          |
| 1.3895        | 4.8696  | 14   | 1.2525          |
| 1.234         | 5.9130  | 17   | 1.0325          |
| 0.9378        | 6.9565  | 20   | 0.9271          |
| 0.8782        | 8.0     | 23   | 0.8920          |
| 0.8394        | 8.6957  | 25   | 0.8784          |
| 0.7845        | 9.7391  | 28   | 0.8647          |
| 0.7863        | 10.7826 | 31   | 0.8503          |
| 0.7261        | 11.8261 | 34   | 0.8417          |
| 0.7333        | 12.8696 | 37   | 0.8337          |
| 0.6709        | 13.9130 | 40   | 0.8289          |
| 0.6612        | 14.9565 | 43   | 0.8270          |
| 0.6253        | 16.0    | 46   | 0.8289          |
| 0.6012        | 16.6957 | 48   | 0.8323          |
| 0.5792        | 17.7391 | 51   | 0.8385          |
| 0.5162        | 18.7826 | 54   | 0.8561          |
| 0.5219        | 19.8261 | 57   | 0.8603          |
| 0.445         | 20.8696 | 60   | 0.8802          |
| 0.4396        | 21.9130 | 63   | 0.9046          |
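The validation loss bottoms out at 0.8270 around epoch 15 (step 43) and rises steadily afterwards while the training loss keeps falling, the usual overfitting signature on a dataset this small. A minimal sketch for pulling the best step out of the log above; the rows are copied verbatim from the table, and nothing here comes from the repository itself:

```python
# Locate the best validation loss in the training log above and check that
# the loss rises monotonically afterwards (the overfitting pattern).
# Rows are (training_loss, epoch, step, validation_loss), copied from the table.
rows = [
    (2.2096, 0.6957, 2, 2.1129),
    (2.167, 1.7391, 5, 1.9558),
    (1.8726, 2.7826, 8, 1.7428),
    (1.7678, 3.8261, 11, 1.5017),
    (1.3895, 4.8696, 14, 1.2525),
    (1.234, 5.9130, 17, 1.0325),
    (0.9378, 6.9565, 20, 0.9271),
    (0.8782, 8.0, 23, 0.8920),
    (0.8394, 8.6957, 25, 0.8784),
    (0.7845, 9.7391, 28, 0.8647),
    (0.7863, 10.7826, 31, 0.8503),
    (0.7261, 11.8261, 34, 0.8417),
    (0.7333, 12.8696, 37, 0.8337),
    (0.6709, 13.9130, 40, 0.8289),
    (0.6612, 14.9565, 43, 0.8270),
    (0.6253, 16.0, 46, 0.8289),
    (0.6012, 16.6957, 48, 0.8323),
    (0.5792, 17.7391, 51, 0.8385),
    (0.5162, 18.7826, 54, 0.8561),
    (0.5219, 19.8261, 57, 0.8603),
    (0.445, 20.8696, 60, 0.8802),
    (0.4396, 21.9130, 63, 0.9046),
]

best = min(rows, key=lambda r: r[3])
print(f"Best validation loss {best[3]} at epoch {best[1]:.2f} (step {best[2]})")

# Validation losses after the best step, in step order.
tail = [r[3] for r in rows if r[2] > best[2]]
print("Monotonic rise after best step:", all(a < b for a, b in zip(tail, tail[1:])))
```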