---
license: apache-2.0
---

SmolLM2 is a family of compact language models available in three sizes: 135M, 360M, and 1.7B parameters.

This repo contains the 1.7B model compiled to WASM, suitable for use with [WebLLM](https://llm.mlc.ai/docs/deploy/webllm.html#webllm-runtime).
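
Below is a minimal sketch of loading this build in the browser through WebLLM's OpenAI-style API. The repo path, `model_id`, and WASM filename are placeholders, not this repo's actual artifact names; substitute the real ones from the file listing.

```typescript
import { CreateMLCEngine, type AppConfig } from "@mlc-ai/web-llm";

// Placeholder artifact locations: point these at this repo's actual
// weight directory and compiled .wasm model library.
const appConfig: AppConfig = {
  model_list: [
    {
      model: "https://huggingface.co/<user>/<this-repo>/resolve/main/",
      model_id: "SmolLM2-1.7B-q4f16_1-MLC",
      model_lib:
        "https://huggingface.co/<user>/<this-repo>/resolve/main/SmolLM2-1.7B-q4f16_1-webgpu.wasm",
    },
  ],
};

// Downloads and initializes the model in the browser (requires WebGPU).
const engine = await CreateMLCEngine("SmolLM2-1.7B-q4f16_1-MLC", {
  appConfig,
  initProgressCallback: (p) => console.log(p.text),
});

// Standard OpenAI-style chat completion against the locally running model.
const reply = await engine.chat.completions.create({
  messages: [{ role: "user", content: "Summarize WebLLM in one sentence." }],
});
console.log(reply.choices[0].message.content);
```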

**SmolLM2-1.7B**

- Demonstrates significant improvements over its predecessor, SmolLM1-1.7B, in instruction following, knowledge, reasoning, and mathematics.
- **Training:** trained on 11 trillion tokens from a diverse dataset mix including FineWeb-Edu, DCLM, The Stack, and new mathematics and coding datasets.
- **Fine-tuning:** developed through supervised fine-tuning (SFT) followed by Direct Preference Optimization (DPO) using UltraFeedback.

**Capabilities:**

- **Tasks:** supports text rewriting, summarization, and function calling.
- **Datasets:** uses datasets developed by Argilla, such as Synth-APIGen-v0.1.
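
Since the model was tuned for function calling, a prompt-based tool-call round trip can be sketched as follows. The tool schema and prompt format here are illustrative only, not SmolLM2's official template, and `engine` is the WebLLM engine created in the sketch above.

```typescript
// Illustrative tool schema; SmolLM2's actual prompt template may differ.
const tools = [
  {
    name: "get_weather",
    description: "Get the current weather for a city.",
    parameters: { city: { type: "string", description: "City name" } },
  },
];

// Reuses `engine` from the loading sketch above.
const completion = await engine.chat.completions.create({
  messages: [
    {
      role: "system",
      content:
        "You can call these tools:\n" +
        JSON.stringify(tools) +
        '\nTo call one, answer only with JSON: {"name": "...", "arguments": {...}}',
    },
    { role: "user", content: "What is the weather in Paris?" },
  ],
});

// Expect a JSON tool call in the reply; parse it and dispatch to a real function.
const call = JSON.parse(completion.choices[0].message.content ?? "{}");
console.log(call.name, call.arguments);
```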
