Tags: Text Generation · PyTorch · English · gpt2

MiniLLM-gpt2-760M

[paper](https://arxiv.org/abs/2306.08543) | [code](https://github.com/microsoft/LMOps/tree/main/minillm)

MiniLLM-gpt2-760M is a gpt2-large (760M) model distilled from gpt2-xl (1.5B) on the databricks-dolly-15k dataset.

Note: MiniLLM requires an SFT model for initialization to perform the PPO optimization.
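The model can be used with the standard transformers text-generation interface. The snippet below is a minimal sketch, assuming the checkpoint loads with the stock GPT-2 classes; the prompt string and sampling parameters are illustrative assumptions, not settings from the MiniLLM paper.

```python
# Minimal generation sketch, assuming the checkpoint loads with the
# standard transformers GPT-2 classes. The prompt and sampling settings
# are illustrative, not values from the MiniLLM paper.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("MiniLLM/MiniLLM-gpt2-760M")
model = AutoModelForCausalLM.from_pretrained("MiniLLM/MiniLLM-gpt2-760M")

prompt = "Explain knowledge distillation in one sentence."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```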

Evaluation

We ask GPT-4 to score the responses generated by MiniLLM. The evaluation prompts are taken from the databricks-dolly-15k test set, Self-Instruct, and Vicuna.
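As a rough illustration of this GPT-4-as-judge setup (not the authors' released evaluation code), a scorer could be sketched as follows; the rubric prompt, the `gpt4_score` helper, and the judge model string are assumptions.

```python
# Hypothetical GPT-4-as-judge scorer; the rubric prompt and helper are
# assumptions for illustration, not the MiniLLM evaluation script.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def gpt4_score(instruction: str, response: str) -> str:
    judge_prompt = (
        "Rate the following response to the instruction on a scale of 1 to 10, "
        "considering helpfulness and accuracy. Reply with the score only.\n"
        f"Instruction: {instruction}\nResponse: {response}"
    )
    completion = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": judge_prompt}],
        temperature=0,
    )
    return completion.choices[0].message.content
```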

Baseline Models

Citation

@inproceedings{minillm,
  title={MiniLLM: Knowledge Distillation of Large Language Models},
  author={Gu, Yuxian and Dong, Li and Wei, Furu and Huang, Minlie},
  booktitle={Proceedings of ICLR},
  year={2024}
}