Collections including paper arxiv:2410.14940

- Self-Rewarding Language Models
  Paper • 2401.10020 • Published • 143
- Orion-14B: Open-source Multilingual Large Language Models
  Paper • 2401.12246 • Published • 11
- MambaByte: Token-free Selective State Space Model
  Paper • 2401.13660 • Published • 50
- MM-LLMs: Recent Advances in MultiModal Large Language Models
  Paper • 2401.13601 • Published • 44

- EVA-CLIP-18B: Scaling CLIP to 18 Billion Parameters
  Paper • 2402.04252 • Published • 25
- Vision Superalignment: Weak-to-Strong Generalization for Vision Foundation Models
  Paper • 2402.03749 • Published • 12
- ScreenAI: A Vision-Language Model for UI and Infographics Understanding
  Paper • 2402.04615 • Published • 38
- EfficientViT-SAM: Accelerated Segment Anything Model Without Performance Loss
  Paper • 2402.05008 • Published • 19

- The Llama 3 Herd of Models
  Paper • 2407.21783 • Published • 105
- Qwen2-VL: Enhancing Vision-Language Model's Perception of the World at Any Resolution
  Paper • 2409.12191 • Published • 73
- Baichuan Alignment Technical Report
  Paper • 2410.14940 • Published • 47
- A Survey of Small Language Models
  Paper • 2410.20011 • Published • 36

- PopAlign: Diversifying Contrasting Patterns for a More Comprehensive Alignment
  Paper • 2410.13785 • Published • 18
- Aligning Large Language Models via Self-Steering Optimization
  Paper • 2410.17131 • Published • 19
- Baichuan Alignment Technical Report
  Paper • 2410.14940 • Published • 47
- SemiEvol: Semi-supervised Fine-tuning for LLM Adaptation
  Paper • 2410.14745 • Published • 45

- Aligning Teacher with Student Preferences for Tailored Training Data Generation
  Paper • 2406.19227 • Published • 24
- Pre-training Distillation for Large Language Models: A Design Space Exploration
  Paper • 2410.16215 • Published • 15
- Baichuan Alignment Technical Report
  Paper • 2410.14940 • Published • 47
- MiniPLM: Knowledge Distillation for Pre-Training Language Models
  Paper • 2410.17215 • Published • 12

- FrugalNeRF: Fast Convergence for Few-shot Novel View Synthesis without Learned Priors
  Paper • 2410.16271 • Published • 80
- Baichuan Alignment Technical Report
  Paper • 2410.14940 • Published • 47
- SAM2Long: Enhancing SAM 2 for Long Video Segmentation with a Training-Free Memory Tree
  Paper • 2410.16268 • Published • 65
- AutoTrain: No-code training for state-of-the-art models
  Paper • 2410.15735 • Published • 54

- Instruction Following without Instruction Tuning
  Paper • 2409.14254 • Published • 27
- Baichuan Alignment Technical Report
  Paper • 2410.14940 • Published • 47
- CompassJudger-1: All-in-one Judge Model Helps Model Evaluation and Evolution
  Paper • 2410.16256 • Published • 58
- Infinity-MM: Scaling Multimodal Performance with Large-Scale and High-Quality Instruction Data
  Paper • 2410.18558 • Published • 17

- INSTRUCTEVAL: Towards Holistic Evaluation of Instruction-Tuned Large Language Models
  Paper • 2306.04757 • Published • 6
- Evaluating Instruction-Tuned Large Language Models on Code Comprehension and Generation
  Paper • 2308.01240 • Published • 2
- Can Large Language Models Understand Real-World Complex Instructions?
  Paper • 2309.09150 • Published • 2
- Evaluating the Instruction-Following Robustness of Large Language Models to Prompt Injection
  Paper • 2308.10819 • Published

- Law of Vision Representation in MLLMs
  Paper • 2408.16357 • Published • 92
- CogVLM2: Visual Language Models for Image and Video Understanding
  Paper • 2408.16500 • Published • 56
- Learning to Move Like Professional Counter-Strike Players
  Paper • 2408.13934 • Published • 21
- Building and better understanding vision-language models: insights and future directions
  Paper • 2408.12637 • Published • 116