---
title: README
emoji: 👁
colorFrom: purple
colorTo: green
sdk: static
pinned: false
---

# HuggingFaceTB

This is the home of synthetic datasets for pre-training, such as [Cosmopedia](https://huggingface.co/datasets/HuggingFaceTB/cosmopedia). We aim to scale synthetic data generation by curating diverse prompts that cover a wide range of topics and by running the generations efficiently on GPUs with tools like [llm-swarm](https://github.com/huggingface/llm-swarm).

We recently released:

- [Cosmopedia](https://huggingface.co/datasets/HuggingFaceTB/cosmopedia): the largest open synthetic dataset, with 25B tokens and more than 30M samples. It contains synthetic textbooks, blog posts, stories, posts, and WikiHow articles generated by Mixtral-8x7B-Instruct-v0.1.
- [Cosmo-1B](https://huggingface.co/HuggingFaceTB/cosmo-1b): a 1B-parameter model trained on Cosmopedia.

For more details, check out our blog post: https://huggingface.co/blog/cosmopedia
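
As a minimal sketch of how to explore Cosmopedia with the `datasets` library: streaming avoids downloading the full 25B-token dataset up front. The subset name `"stories"` and the `"text"` field are assumptions here; check the dataset card for the exact config names and schema.

```python
from datasets import load_dataset

# Stream one Cosmopedia subset instead of downloading everything.
# "stories" and the "text" field are assumptions; see the dataset card.
ds = load_dataset("HuggingFaceTB/cosmopedia", "stories", split="train", streaming=True)

for sample in ds.take(1):
    print(sample["text"][:500])
```

And a quick, hedged example of generating text with Cosmo-1B via `transformers`; the prompt and sampling parameters below are illustrative, not settings from the blog post.

```python
from transformers import pipeline

# Load Cosmo-1B and generate a short continuation.
generator = pipeline("text-generation", model="HuggingFaceTB/cosmo-1b")
out = generator("Photosynthesis is", max_new_tokens=64, do_sample=True, temperature=0.7)
print(out[0]["generated_text"])
```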