Text Generation
Transformers
PyTorch
English
mistral
text-generation-inference
Inference Endpoints

BSL-160M

paper | code

BSL-160M is a 160M-parameter model with the Mistral architecture, pre-trained from scratch on the CC split of RedPajama.

It is used as the baseline for PDS-160M.
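Since the card tags the model for text generation with Transformers and PyTorch, a minimal usage sketch could look like the following. This assumes the standard `transformers` causal-LM API; the prompt and generation settings are illustrative, not recommendations from the authors.

```python
# Minimal sketch: load BSL-160M and generate greedily.
# Prompt and max_new_tokens are illustrative assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Data-Selection/BSL-160M"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("Language models are", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32, do_sample=False)
text = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(text)
```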

Evaluation

PDS-selected data improves the performance of language models pre-trained from scratch and saves pre-training computation. The improvement scales up to large model sizes.

Citation

@article{gu2024data,
  title={Data Selection via Optimal Control for Language Models},
  author={Gu, Yuxian and Dong, Li and Wang, Hongning and Hao, Yaru and Dong, Qingxiu and Wei, Furu and Huang, Minlie},
  journal={arXiv preprint arXiv:2410.07064},
  year={2024}
}
