shirman committed
Commit 64e6c83
1 Parent(s): fd7bdc8
Files changed (1)
  1. README.md +4 -2
README.md CHANGED
@@ -1,13 +1,15 @@
 SmolLM2, a family of compact language models available in three sizes: 135M, 360M, and 1.7B parameters.
 
-In this repo is WASM compiled 1.7B model suitable for (WebLLM)[https://llm.mlc.ai/docs/deploy/webllm.html#webllm-runtime]
+In this repo is WASM compiled 1.7B model suitable for [WebLLM](https://llm.mlc.ai/docs/deploy/webllm.html#webllm-runtime)
+
+**SmolLM2-1.7B**
 
-**SmolLM2-1.7B**:
 Demonstrates significant improvements over its predecessor, SmolLM1-1.7B, in instruction following, knowledge, reasoning, and mathematics.
 Training: Trained on 11 trillion tokens using a diverse dataset combination including FineWeb-Edu, DCLM, The Stack, and new mathematics and coding datasets.
 Fine-Tuning: Developed through supervised fine-tuning (SFT) and Direct Preference Optimization (DPO) using UltraFeedback.
 
 **Capabilities:**
+
 Tasks: Supports tasks such as text rewriting, summarization, and function calling.
 Datasets: Utilizes datasets developed by Argilla, such as Synth-APIGen-v0.1.
 
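Since the README now points at the WebLLM runtime, a minimal loading sketch may be useful context for this change. The snippet below uses the documented WebLLM custom-model flow (`AppConfig.model_list` plus `CreateMLCEngine`); the `<user>`/`<repo>` URLs, the `model_id`, and the `.wasm` filename are placeholders for the artifacts in this repo, not values confirmed by the commit.

```ts
import * as webllm from "@mlc-ai/web-llm";

// Hypothetical locations: substitute the actual MLC weight repo and the
// compiled WASM model library that this repo ships.
const appConfig: webllm.AppConfig = {
  model_list: [
    {
      // Quantized weights in MLC format (assumed repo URL)
      model: "https://huggingface.co/<user>/SmolLM2-1.7B-Instruct-q4f16_1-MLC",
      // Any unique id; reused when creating the engine below
      model_id: "SmolLM2-1.7B-Instruct-q4f16_1-MLC",
      // The WASM-compiled model library from this repo (assumed path)
      model_lib:
        "https://huggingface.co/<user>/<repo>/resolve/main/SmolLM2-1.7B-q4f16_1-webgpu.wasm",
    },
  ],
};

async function main() {
  // Fetch weights + WASM library and initialize a WebGPU-backed engine.
  const engine = await webllm.CreateMLCEngine(
    "SmolLM2-1.7B-Instruct-q4f16_1-MLC",
    {
      appConfig,
      initProgressCallback: (report) => console.log(report.text),
    },
  );

  // OpenAI-style chat completion, running entirely in the browser.
  const reply = await engine.chat.completions.create({
    messages: [
      { role: "user", content: "Summarize what SmolLM2 is in one sentence." },
    ],
  });
  console.log(reply.choices[0].message.content);
}

main();
```

This mirrors the custom-model example in the WebLLM docs linked from the README; only the URLs and ids need to be swapped for the real artifacts in this repo.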