Baseline Models
BSL-160M is a 160M-parameter language model with the Mistral architecture, pre-trained from scratch on the CC split of Redpajama.
It serves as the baseline for PDS-160M.
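A minimal sketch of loading the checkpoint and generating text with Hugging Face `transformers`; the repository id `Data-Selection/BSL-160M` is an assumption and should be replaced with the actual Hub path if it differs:

```python
# Minimal usage sketch. The repo id "Data-Selection/BSL-160M" is an assumption;
# substitute the actual Hugging Face Hub path for this checkpoint.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Data-Selection/BSL-160M"  # assumed hub id
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

# Greedy decoding from a short prompt.
inputs = tokenizer("The Common Crawl corpus is", return_tensors="pt")
with torch.no_grad():
    outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```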
PDS-selected data improves the performance of language models pre-trained from scratch and reduces pre-training computation. The improvement scales up to large model sizes.
Please cite the PDS paper if you use this model:

```bibtex
@article{gu2024data,
  title={Data Selection via Optimal Control for Language Models},
  author={Gu, Yuxian and Dong, Li and Wang, Hongning and Hao, Yaru and Dong, Qingxiu and Wei, Furu and Huang, Minlie},
  journal={arXiv preprint arXiv:2410.07064},
  year={2024}
}
```