Finetune SmolLM2, Llama 3.2, Gemma 2, Mistral 2-5x faster with 70% less memory via Unsloth!
We have a free Google Colab Tesla T4 notebook for Llama 3.2 (3B) here: https://colab.research.google.com/drive/1Ys44kVvmeZtnICzWz0xgpRnrIOjZAuxp?usp=sharing
unsloth/SmolLM2-1.7B-Instruct
For more details on the model, please refer to Hugging Face's original model card.
✨ Finetune for Free
All notebooks are beginner friendly! Add your dataset, click "Run All", and you'll get a 2x faster finetuned model that can be exported to GGUF or vLLM, or uploaded to Hugging Face.
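For example, here is a minimal sketch of loading this model with Unsloth and attaching LoRA adapters; the hyperparameters below are illustrative defaults, not the notebooks' exact settings:

```python
from unsloth import FastLanguageModel

# Load the model in 4-bit to reduce memory use (illustrative settings).
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/SmolLM2-1.7B-Instruct",
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach LoRA adapters so only a small fraction of weights is trained.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    lora_alpha=16,
    lora_dropout=0,
    bias="none",
    use_gradient_checkpointing="unsloth",
)
```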
| Unsloth supports | Free Notebooks | Performance | Memory use |
|---|---|---|---|
| Llama-3.2 (3B) | ▶️ Start on Colab | 2.4x faster | 58% less |
| Llama-3.2 (11B vision) | ▶️ Start on Colab | 2.4x faster | 58% less |
| Llama-3.1 (8B) | ▶️ Start on Colab | 2.4x faster | 58% less |
| Phi-3.5 (mini) | ▶️ Start on Colab | 2x faster | 50% less |
| Gemma 2 (9B) | ▶️ Start on Colab | 2.4x faster | 58% less |
| Mistral (7B) | ▶️ Start on Colab | 2.2x faster | 62% less |
| DPO - Zephyr | ▶️ Start on Colab | 1.9x faster | 19% less |
- The conversational notebook is useful for ShareGPT ChatML / Vicuna templates.
- The text completion notebook is for raw text. The DPO notebook replicates Zephyr.
- Kaggle has 2x T4s, but we only use one; due to overhead, a single T4 is 5x faster.
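After training, the notebooks export the finetuned model in several formats. A hedged sketch of the export step, assuming Unsloth's save helpers; the output directory, quantization method, and repo id below are placeholders:

```python
# Merge LoRA weights into the base model and save as GGUF for llama.cpp
# ("q4_k_m" is one common quantization choice, used here as an example).
model.save_pretrained_gguf("smollm2-finetune", tokenizer, quantization_method="q4_k_m")

# Or push merged 16-bit weights to the Hugging Face Hub for serving with vLLM
# ("your-username/..." is a placeholder repo id).
model.push_to_hub_merged(
    "your-username/SmolLM2-1.7B-Instruct-finetune",
    tokenizer,
    save_method="merged_16bit",
)
```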
Special Thanks
A huge thank you to the Hugging Face team for creating and releasing these models.
Model Summary
SmolLM2 is a family of compact language models available in three sizes: 135M, 360M, and 1.7B parameters. They are capable of solving a wide range of tasks while being lightweight enough to run on-device.
The 1.7B variant demonstrates significant advances over its predecessor, SmolLM1-1.7B, particularly in instruction following, knowledge, reasoning, and mathematics. It was trained on 11 trillion tokens using a diverse combination of datasets: FineWeb-Edu, DCLM, and The Stack, along with new mathematics and coding datasets that we curated and will release soon. We developed the instruct version through supervised fine-tuning (SFT) using a combination of public datasets and our own curated datasets. We then applied Direct Preference Optimization (DPO) using UltraFeedback.
The instruct model additionally supports tasks such as text rewriting, summarization, and function calling, thanks to datasets such as Synth-APIGen-v0.1 developed by Argilla.
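As a quick usage sketch with plain transformers, assuming the standard chat-template workflow (the prompt and generation settings are illustrative, not from the original model card):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "unsloth/SmolLM2-1.7B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint)

# The instruct model expects its chat template; here we ask for a summary
# (an illustrative prompt).
messages = [{"role": "user", "content": "Summarize: SmolLM2 is a family of compact language models built to run on-device."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
outputs = model.generate(inputs, max_new_tokens=128, do_sample=True, temperature=0.2, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```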
Base model: HuggingFaceTB/SmolLM2-1.7B