- Self-Rewarding Language Models
  Paper • 2401.10020 • Published • 143
- Orion-14B: Open-source Multilingual Large Language Models
  Paper • 2401.12246 • Published • 11
- MambaByte: Token-free Selective State Space Model
  Paper • 2401.13660 • Published • 50
- MM-LLMs: Recent Advances in MultiModal Large Language Models
  Paper • 2401.13601 • Published • 44

Collections including paper arxiv:2410.18451

- PopAlign: Diversifying Contrasting Patterns for a More Comprehensive Alignment
  Paper • 2410.13785 • Published • 18
- Aligning Large Language Models via Self-Steering Optimization
  Paper • 2410.17131 • Published • 19
- Baichuan Alignment Technical Report
  Paper • 2410.14940 • Published • 47
- SemiEvol: Semi-supervised Fine-tuning for LLM Adaptation
  Paper • 2410.14745 • Published • 45

- Skywork-Reward: Bag of Tricks for Reward Modeling in LLMs
  Paper • 2410.18451 • Published • 13
- Skywork/Skywork-Reward-Gemma-2-27B-v0.2
  Text Classification • Updated • 1.93k • 13
- Skywork/Skywork-Reward-Llama-3.1-8B-v0.2
  Text Classification • Updated • 240k • 10
- Skywork/Skywork-Reward-Gemma-2-27B
  Text Classification • Updated • 221k • 36
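
The three Skywork model entries above are reward models served through the Text Classification pipeline. Below is a minimal sketch of scoring one conversation with one of them, assuming the standard transformers sequence-classification interface with a scalar reward head and a chat template on the tokenizer; the conversation text is a placeholder, not taken from the collection.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "Skywork/Skywork-Reward-Llama-3.1-8B-v0.2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    num_labels=1,  # assumption: a single scalar reward head
)

# Placeholder conversation; in practice, score the response you want to rank.
conversation = [
    {"role": "user", "content": "Explain what a reward model does."},
    {"role": "assistant", "content": "It assigns a scalar score to a response."},
]
input_ids = tokenizer.apply_chat_template(
    conversation, tokenize=True, return_tensors="pt"
).to(model.device)

with torch.no_grad():
    reward = model(input_ids).logits[0][0].item()  # higher means more preferred
print(reward)
```

The scalar logit is the reward; to compare candidate responses for the same prompt, score each one separately and rank by the returned values.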

- A Picture is Worth More Than 77 Text Tokens: Evaluating CLIP-Style Models on Dense Captions
  Paper • 2312.08578 • Published • 16
- ZeroQuant(4+2): Redefining LLMs Quantization with a New FP6-Centric Strategy for Diverse Generative Tasks
  Paper • 2312.08583 • Published • 9
- Vision-Language Models as a Source of Rewards
  Paper • 2312.09187 • Published • 11
- StemGen: A music generation model that listens
  Paper • 2312.08723 • Published • 47

- Trusted Source Alignment in Large Language Models
  Paper • 2311.06697 • Published • 10
- Diffusion Model Alignment Using Direct Preference Optimization
  Paper • 2311.12908 • Published • 47
- SuperHF: Supervised Iterative Learning from Human Feedback
  Paper • 2310.16763 • Published • 1
- Enhancing Diffusion Models with Text-Encoder Reinforcement Learning
  Paper • 2311.15657 • Published • 2