- DSPy Assertions: Computational Constraints for Self-Refining Language Model Pipelines
  Paper • 2312.13382 • Published • 3
- DSPy: Compiling Declarative Language Model Calls into Self-Improving Pipelines
  Paper • 2310.03714 • Published • 30
- TextGrad: Automatic "Differentiation" via Text
  Paper • 2406.07496 • Published • 26

Collections including paper arxiv:2406.07496

- Iterative Reasoning Preference Optimization
  Paper • 2404.19733 • Published • 47
- Better & Faster Large Language Models via Multi-token Prediction
  Paper • 2404.19737 • Published • 73
- ORPO: Monolithic Preference Optimization without Reference Model
  Paper • 2403.07691 • Published • 62
- KAN: Kolmogorov-Arnold Networks
  Paper • 2404.19756 • Published • 108

- Language Models as Compilers: Simulating Pseudocode Execution Improves Algorithmic Reasoning in Language Models
  Paper • 2404.02575 • Published • 47
- Toward Self-Improvement of LLMs via Imagination, Searching, and Criticizing
  Paper • 2404.12253 • Published • 53
- SnapKV: LLM Knows What You are Looking for Before Generation
  Paper • 2404.14469 • Published • 23
- FlowMind: Automatic Workflow Generation with LLMs
  Paper • 2404.13050 • Published • 32

- Evaluating Very Long-Term Conversational Memory of LLM Agents
  Paper • 2402.17753 • Published • 18
- StructLM: Towards Building Generalist Models for Structured Knowledge Grounding
  Paper • 2402.16671 • Published • 26
- Do Large Language Models Latently Perform Multi-Hop Reasoning?
  Paper • 2402.16837 • Published • 24
- Divide-or-Conquer? Which Part Should You Distill Your LLM?
  Paper • 2402.15000 • Published • 22

- Why do Learning Rates Transfer? Reconciling Optimization and Scaling Limits for Deep Learning
  Paper • 2402.17457 • Published
- Curvature-Informed SGD via General Purpose Lie-Group Preconditioners
  Paper • 2402.04553 • Published
- TextGrad: Automatic "Differentiation" via Text
  Paper • 2406.07496 • Published • 26
- Surge Phenomenon in Optimal Learning Rate and Batch Size Scaling
  Paper • 2405.14578 • Published

- PRDP: Proximal Reward Difference Prediction for Large-Scale Reward Finetuning of Diffusion Models
  Paper • 2402.08714 • Published • 10
- Data Engineering for Scaling Language Models to 128K Context
  Paper • 2402.10171 • Published • 21
- RLVF: Learning from Verbal Feedback without Overgeneralization
  Paper • 2402.10893 • Published • 10
- Coercing LLMs to do and reveal (almost) anything
  Paper • 2402.14020 • Published • 12

- DocGraphLM: Documental Graph Language Model for Information Extraction
  Paper • 2401.02823 • Published • 34
- Finetuned Multimodal Language Models Are High-Quality Image-Text Data Filters
  Paper • 2403.02677 • Published • 16
- FlashSpeech: Efficient Zero-Shot Speech Synthesis
  Paper • 2404.14700 • Published • 29
- TextGrad: Automatic "Differentiation" via Text
  Paper • 2406.07496 • Published • 26