---

# HuggingFaceTB

This is the home of synthetic datasets for pre-training, such as [Cosmopedia](https://huggingface.co/datasets/HuggingFaceTB/cosmopedia) v1 and v2. We're trying to scale synthetic data generation by curating diverse prompts that cover a wide range of topics and efficiently scaling the generations on GPUs with tools like [llm-swarm](https://github.com/huggingface/llm-swarm).
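
As a quick taste of the data, here is a minimal sketch of streaming a few Cosmopedia samples with the `datasets` library. The subset name and column names below are assumptions based on the dataset card, so check the card for the current schema:

```python
from datasets import load_dataset

# Stream one subset instead of downloading the full 25B-token dataset.
# "stories" is assumed to be a valid Cosmopedia config; see the dataset card for the full list.
ds = load_dataset("HuggingFaceTB/cosmopedia", "stories", split="train", streaming=True)

for sample in ds.take(2):
    print(sample["prompt"][:200])  # assumed column: the seed prompt given to Mixtral-8x7B-Instruct-v0.1
    print(sample["text"][:200])    # assumed column: the synthetic text it generated
```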

We released:

- [Cosmopedia](https://huggingface.co/datasets/HuggingFaceTB/cosmopedia): the largest open synthetic dataset, with 25B tokens and more than 30M samples. It contains synthetic textbooks, blog posts, stories, posts, and WikiHow articles generated by Mixtral-8x7B-Instruct-v0.1.
- [Cosmo-1B](https://huggingface.co/HuggingFaceTB/cosmo-1b): a 1B model trained on Cosmopedia.
- [FineWeb-Edu](https://huggingface.co/datasets/HuggingFaceFW/fineweb-edu): a filtered version of the FineWeb dataset for educational content.
- [SmolLM models](https://huggingface.co/collections/HuggingFaceTB/smollm-6695016cad7167254ce15966): a series of strong small models in three sizes: 135M, 360M, and 1.7B (see the usage sketch after this list).
- [SmolLM-Corpus](https://huggingface.co/datasets/HuggingFaceTB/smollm-corpus): the pre-training corpus of the SmolLM models, including **Cosmopedia v0.2**, **FineWeb-Edu**, and **Python-Edu**.
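
Since the SmolLM checkpoints are standard `transformers` causal language models, a minimal generation sketch looks like the following. The checkpoint id is an assumption based on the collection above; swap in the size you want:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed checkpoint id from the SmolLM collection; the 360M and 1.7B variants follow the same pattern.
checkpoint = "HuggingFaceTB/SmolLM-135M"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint)

inputs = tokenizer("Synthetic data is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```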

For more details, check our blog posts: https://huggingface.co/blog/cosmopedia and https://huggingface.co/blog/smollm