MiniPLM: Knowledge Distillation for Pre-Training Language Models Paper • 2410.17215 • Published Oct 22, 2024 • 12 upvotes
Data Selection via Optimal Control for Language Models Paper • 2410.07064 • Published Oct 9, 2024 • 8 upvotes
Compact Language Models via Pruning and Knowledge Distillation Paper • 2407.14679 • Published Jul 19, 2024 • 37 upvotes
Gemini 1.5: Unlocking multimodal understanding across millions of tokens of context Paper • 2403.05530 • Published Mar 8, 2024 • 59 upvotes
An Emulator for Fine-Tuning Large Language Models using Small Language Models Paper • 2310.12962 • Published Oct 19, 2023 • 14 upvotes
Retentive Network: A Successor to Transformer for Large Language Models Paper • 2307.08621 • Published Jul 17, 2023 • 170 upvotes