---
license: apache-2.0
datasets:
- togethercomputer/RedPajama-Data-1T
language:
- en
pipeline_tag: text-generation
library_name: transformers
---
## BSL-470M
[paper](https://arxiv.org/abs/2410.07064) | [code](https://github.com/microsoft/LMOps/tree/main/data_selection)
**BSL-470M** is a 470M-parameter model with the [Mistral](https://arxiv.org/abs/2310.06825) architecture, pre-trained from scratch on the CC split of [RedPajama](https://github.com/togethercomputer/RedPajama-Data).
**It is used as the baseline for [PDS-470M](https://huggingface.co/Data-Selection/PDS-470M).**
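### Usage
A minimal loading sketch with the `transformers` library, assuming the repo id is `Data-Selection/BSL-470M` (inferred from the model name and the companion PDS-470M repo, not confirmed by this card):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Data-Selection/BSL-470M"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float32)

# Greedy generation from a short prompt
inputs = tokenizer("Language models are", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```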
### Evaluation
PDS-selected data improves the performance of language models pre-trained from scratch and reduces pre-training computation. The improvement scales up to large model sizes.
<p align='left'>
<img src="https://cdn-uploads.huggingface.co/production/uploads/624ac662102fcdff87be51b9/6undIr37d10qD73TDiPDK.png" width="600">
</p>
### Citation
```bibtex
@article{gu2024data,
title={Data Selection via Optimal Control for Language Models},
author={Gu, Yuxian and Dong, Li and Wang, Hongning and Hao, Yaru and Dong, Qingxiu and Wei, Furu and Huang, Minlie},
journal={arXiv preprint arXiv:2410.07064},
year={2024}
}
```