Collections including paper arxiv:2311.04235

- Chain-of-Thought Reasoning Without Prompting
  Paper • 2402.10200 • Published • 99
- How to Train Data-Efficient LLMs
  Paper • 2402.09668 • Published • 38
- BitDelta: Your Fine-Tune May Only Be Worth One Bit
  Paper • 2402.10193 • Published • 17
- A Human-Inspired Reading Agent with Gist Memory of Very Long Contexts
  Paper • 2402.09727 • Published • 35

- Self-Rewarding Language Models
  Paper • 2401.10020 • Published • 143
- ReFT: Reasoning with Reinforced Fine-Tuning
  Paper • 2401.08967 • Published • 27
- Tuning Language Models by Proxy
  Paper • 2401.08565 • Published • 20
- TrustLLM: Trustworthiness in Large Language Models
  Paper • 2401.05561 • Published • 64

- Can LLMs Follow Simple Rules?
  Paper • 2311.04235 • Published • 10
- The Unreasonable Ineffectiveness of the Deeper Layers
  Paper • 2403.17887 • Published • 78
- GaLore: Memory-Efficient LLM Training by Gradient Low-Rank Projection
  Paper • 2403.03507 • Published • 182
- Sora: A Review on Background, Technology, Limitations, and Opportunities of Large Vision Models
  Paper • 2402.17177 • Published • 88

- GPT4All: An Ecosystem of Open Source Compressed Language Models
  Paper • 2311.04931 • Published • 20
- Can LLMs Follow Simple Rules?
  Paper • 2311.04235 • Published • 10
- Prompt Engineering a Prompt Engineer
  Paper • 2311.05661 • Published • 20
- Orca 2: Teaching Small Language Models How to Reason
  Paper • 2311.11045 • Published • 70