---
license: apache-2.0
datasets:
- togethercomputer/RedPajama-Data-1T
language:
- en
pipeline_tag: text-generation
library_name: transformers
---

## BSL-160M

[paper](https://arxiv.org/abs/2410.07064) | [code](https://github.com/microsoft/LMOps/tree/main/data_selection)

**BSL-160M** is a 160M-parameter model with the [Mistral](https://arxiv.org/abs/2310.06825) architecture, pre-trained from scratch on the CC split of [RedPajama](https://github.com/togethercomputer/RedPajama-Data). **It serves as the baseline for [PDS-160M](https://huggingface.co/Data-Selection/PDS-160M).**

### Evaluation

PDS-selected data improves the performance of language models pre-trained from scratch and saves pre-training computation. The improvement scales up to large model sizes.
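
### Usage

A minimal sketch for loading the model with `transformers`. Note that the repository name `Data-Selection/BSL-160M` is an assumption inferred from the linked PDS-160M checkpoint; adjust it if the model is hosted under a different name.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# NOTE: the repository name below is an assumption, inferred from the
# linked Data-Selection/PDS-160M checkpoint; change it if the model
# lives elsewhere on the Hub.
model_name = "Data-Selection/BSL-160M"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Greedy decoding of a short continuation from a sample prompt.
inputs = tokenizer("Language models are", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```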
### Citation

```bibtex
@article{gu2024data,
  title={Data Selection via Optimal Control for Language Models},
  author={Gu, Yuxian and Dong, Li and Wang, Hongning and Hao, Yaru and Dong, Qingxiu and Wei, Furu and Huang, Minlie},
  journal={arXiv preprint arXiv:2410.07064},
  year={2024}
}
```