Part of the "Configurations choice" collection: choice of configuration based on the results of different fine-tuning runs. All configurations give more or less the same results, but configurations 1 and 2 are considerably faster (lr).
This model is a fine-tuned version of meta-llama/Meta-Llama-3.1-8B-Instruct on the GaetanMichelet/chat-60_ft_task-3, GaetanMichelet/chat-120_ft_task-3 and GaetanMichelet/chat-180_ft_task-3 datasets. It achieves the following results on the evaluation set:

- Loss: 0.4942 (best validation loss; see the training results table below)
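As a quick usage sketch (not part of the original card): the snippet below shows how a chat fine-tune of Meta-Llama-3.1-8B-Instruct can be loaded and queried with `transformers`. The repository id is a placeholder, since the card does not state this model's actual id.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "GaetanMichelet/llama-31-8B_task-3"  # placeholder, not the real repo id

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.bfloat16,  # 8B weights in bf16 fit on a single 24 GB+ GPU
    device_map="auto",
)

# Format the prompt with the Llama 3.1 chat template.
messages = [{"role": "user", "content": "Hello! What were you fine-tuned for?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```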
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
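As an illustrative sketch only, since the actual values are not listed above: the configuration below shows one way such a run could be set up with the `transformers` `Trainer`, matching the shape of the results table (per-epoch evaluation, best-model selection, and a stop at epoch 34 after the validation minimum near epoch 27, consistent with early stopping). Every value is an assumption, not the card's actual setting, and `model`, `train_dataset`, and `eval_dataset` are assumed to be prepared elsewhere.

```python
# Illustrative only: every value here is an assumption, not this card's setting.
from transformers import Trainer, TrainingArguments, EarlyStoppingCallback

training_args = TrainingArguments(
    output_dir="llama-31-8B_task-3",   # placeholder output path
    learning_rate=1e-4,                # assumed; "(lr)" above hints the configs varied this
    per_device_train_batch_size=8,     # assumed
    num_train_epochs=50,               # assumed upper bound; this run stopped at epoch 34
    eval_strategy="epoch",             # matches the per-epoch rows in the results table
    save_strategy="epoch",
    load_best_model_at_end=True,       # would keep the epoch-~27 checkpoint (val loss 0.4942)
    metric_for_best_model="eval_loss",
)

trainer = Trainer(
    model=model,                       # base model, prepared elsewhere
    args=training_args,
    train_dataset=train_dataset,       # e.g. the chat-*_ft_task-3 splits
    eval_dataset=eval_dataset,
    callbacks=[EarlyStoppingCallback(early_stopping_patience=7)],  # assumed patience
)
trainer.train()
```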
### Training results

| Training Loss | Epoch   | Step | Validation Loss |
|:-------------:|:-------:|:----:|:---------------:|
| 2.5495        | 0.9412  | 8    | 2.5077          |
| 2.3733        | 2.0     | 17   | 2.4775          |
| 2.46          | 2.9412  | 25   | 2.4218          |
| 2.4585        | 4.0     | 34   | 2.3278          |
| 2.2624        | 4.9412  | 42   | 2.1919          |
| 2.0553        | 6.0     | 51   | 1.9704          |
| 1.7403        | 6.9412  | 59   | 1.7066          |
| 1.3756        | 8.0     | 68   | 1.3617          |
| 1.11          | 8.9412  | 76   | 1.0613          |
| 0.7161        | 10.0    | 85   | 0.7772          |
| 0.7609        | 10.9412 | 93   | 0.6787          |
| 0.4358        | 12.0    | 102  | 0.6182          |
| 0.4774        | 12.9412 | 110  | 0.5912          |
| 0.5569        | 14.0    | 119  | 0.5746          |
| 0.427         | 14.9412 | 127  | 0.5487          |
| 0.4672        | 16.0    | 136  | 0.5339          |
| 0.3495        | 16.9412 | 144  | 0.5525          |
| 0.4731        | 18.0    | 153  | 0.5323          |
| 0.3913        | 18.9412 | 161  | 0.5243          |
| 0.5624        | 20.0    | 170  | 0.5253          |
| 0.4684        | 20.9412 | 178  | 0.5222          |
| 0.3029        | 22.0    | 187  | 0.5100          |
| 0.3522        | 22.9412 | 195  | 0.5085          |
| 0.3855        | 24.0    | 204  | 0.4971          |
| 0.317         | 24.9412 | 212  | 0.5049          |
| 0.338         | 26.0    | 221  | 0.5016          |
| 0.391         | 26.9412 | 229  | 0.4942          |
| 0.3964        | 28.0    | 238  | 0.5010          |
| 0.2951        | 28.9412 | 246  | 0.5098          |
| 0.4021        | 30.0    | 255  | 0.5068          |
| 0.4021        | 30.9412 | 263  | 0.5070          |
| 0.3456        | 32.0    | 272  | 0.5025          |
| 0.4431        | 32.9412 | 280  | 0.5050          |
| 0.4131        | 34.0    | 289  | 0.5094          |
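A small self-contained sketch (not from the original card) for scanning the table above: it finds the row with the lowest validation loss, which for this run is 0.4942 at epoch 26.9412. Only a subset of the (epoch, validation loss) pairs is copied in, for brevity.

```python
# (epoch, validation_loss) pairs copied from the results table (abbreviated).
results = [
    (0.9412, 2.5077), (8.0, 1.3617), (16.0, 0.5339), (24.0, 0.4971),
    (26.9412, 0.4942), (28.0, 0.5010), (32.0, 0.5025), (34.0, 0.5094),
]

best_epoch, best_loss = min(results, key=lambda pair: pair[1])
print(f"Best validation loss {best_loss:.4f} at epoch {best_epoch:.4f}")
# -> Best validation loss 0.4942 at epoch 26.9412
```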
Base model: meta-llama/Llama-3.1-8B