loubnabnl committed
Commit 73bccd6
1 Parent(s): 1f7f07d

Update README.md

Files changed (1)
  1. README.md +2 -4
README.md CHANGED
@@ -8,16 +8,14 @@ pinned: false
 ---
 
 # HuggingFaceTB
-This is the home of synthetic datasets for pre-training, such as [Cosmopedia](https://huggingface.co/datasets/HuggingFaceTB/cosmopedia) v1 and v2. We're trying to scale synthetic data generation by curating
-diverse prompts that cover a wide range of topics and efficiently scaling the generations on GPUs with tools like [llm-swarm](https://github.com/huggingface/llm-swarm).
+This is the home for small LLMs (SmolLM) and high quality pre-training datasets, such as [Cosmopedia](https://huggingface.co/datasets/HuggingFaceTB/cosmopedia) and [Smollm-Corpus](https://huggingface.co/datasets/HuggingFaceTB/smollm-corpus).
 
 We released:
 
 - [Cosmopedia](https://huggingface.co/datasets/HuggingFaceTB/cosmopedia): the largest open synthetic dataset, with 25B tokens and more than 30M samples. It contains synthetic textbooks, blog posts, stories, posts, and WikiHow articles generated by Mixtral-8x7B-Instruct-v0.1.
 - [Cosmo-1B](https://huggingface.co/HuggingFaceTB/cosmo-1b): a 1B model trained on Cosmopedia.
 - [FineWeb-Edu](https://huggingface.co/datasets/HuggingFaceFW/fineweb-edu): a filtered version of the FineWeb dataset for educational content
-- [SmolLM models](https://huggingface.co/collections/HuggingFaceTB/smollm-6695016cad7167254ce15966): a series of strong small models in three sizes: 135M, 360M and 1.7B
 - [Smollm-Corpus](https://huggingface.co/datasets/HuggingFaceTB/smollm-corpus): the pre-training corpus of SmolLM models, including **Cosmopedia v0.2**, **FineWeb-Edu** and **Python-Edu**.
+- [SmolLM models](https://huggingface.co/collections/HuggingFaceTB/smollm-6695016cad7167254ce15966) and [SmolLM2](https://huggingface.co/collections/HuggingFaceTB/smollm2-checkpoints-6723884218bcda64b34d7db9): a series of strong small models in three sizes: 135M, 360M and 1.7B
 
 For more details check our blog posts: https://huggingface.co/blog/cosmopedia and https://huggingface.co/blog/smollm
-
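
As a rough sketch of how these releases might be used with the `datasets` and `transformers` libraries: the Cosmopedia subset name (`stories`) and the checkpoint id (`HuggingFaceTB/SmolLM2-1.7B`) below are assumed examples, not taken from this commit; check the dataset and model cards for the exact identifiers.

```python
# Hypothetical usage sketch; subset and model names are assumptions (see note above).
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer

# Stream a Cosmopedia subset so the full 25B-token corpus is not downloaded.
cosmopedia = load_dataset("HuggingFaceTB/cosmopedia", "stories",
                          split="train", streaming=True)
example = next(iter(cosmopedia))
print(example.keys())  # inspect the available fields of one sample

# Generate a short continuation with an assumed SmolLM2 checkpoint.
model_id = "HuggingFaceTB/SmolLM2-1.7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
inputs = tokenizer("Synthetic data is useful for pre-training because", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```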