This model was submitted to the Open LLM Leaderboard for full evaluation.
If you encounter the following error:

```
exception: data did not match any variant of untagged enum modelwrapper at line 1251003 column 3
```

please upgrade your `transformers` package by running:

```shell
pip install --upgrade "transformers>=4.45"
```
This model was trained on mlabonne/FineTome-100k for 2 epochs with rsLoRA + QLoRA, reaching a final training loss of 0.5964.
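For context, rank-stabilized LoRA (rsLoRA) differs from standard LoRA only in how the low-rank adapter update is scaled: standard LoRA multiplies the update by `alpha / r`, while rsLoRA uses `alpha / sqrt(r)`, which keeps the update magnitude stable as the rank grows. A minimal sketch of the two scaling rules (the `alpha` and `r` values below are illustrative, not the ones used for this fine-tune):

```python
import math

def lora_scale(alpha: int, r: int) -> float:
    # Standard LoRA: update = (alpha / r) * B @ A
    return alpha / r

def rslora_scale(alpha: int, r: int) -> float:
    # Rank-stabilized LoRA: update = (alpha / sqrt(r)) * B @ A
    return alpha / math.sqrt(r)

# With alpha = 16: as r grows, the standard scale shrinks like 1/r,
# while the rsLoRA scale shrinks only like 1/sqrt(r).
for r in (8, 16, 64):
    print(r, lora_scale(16, r), rslora_scale(16, r))
```

In PEFT, this corresponds to setting `use_rslora=True` on the `LoraConfig`; QLoRA additionally quantizes the frozen base weights to 4-bit while training the adapters.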
This model follows the same chat template as the base model.
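Concretely, the base model's Llama 3.x template wraps each message in header tokens. A rough, hand-rolled sketch of how a prompt is assembled (for illustration only — it omits details such as the default system preamble; in practice, call the tokenizer's `apply_chat_template`, which is the authoritative implementation):

```python
def build_llama3_prompt(messages):
    # Assemble a Llama 3.x-style prompt from {"role", "content"} messages.
    prompt = "<|begin_of_text|>"
    for m in messages:
        prompt += (
            f"<|start_header_id|>{m['role']}<|end_header_id|>\n\n"
            f"{m['content']}<|eot_id|>"
        )
    # Open the assistant header to cue the model to generate its turn.
    prompt += "<|start_header_id|>assistant<|end_header_id|>\n\n"
    return prompt

msgs = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"},
]
print(build_llama3_prompt(msgs))
```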
This Llama model was trained 2x faster with Unsloth and Hugging Face's TRL library.
Base model
meta-llama/Llama-3.2-3B-Instruct