- Pangea: A Fully Open Multilingual Multimodal LLM for 39 Languages
  Paper • 2410.16153 • Published • 44
- AutoTrain: No-code training for state-of-the-art models
  Paper • 2410.15735 • Published • 59
- The Curse of Multi-Modalities: Evaluating Hallucinations of Large Multimodal Models across Language, Visual, and Audio
  Paper • 2410.12787 • Published • 31
- LEOPARD: A Vision Language Model For Text-Rich Multi-Image Tasks
  Paper • 2410.01744 • Published • 26

Collections
Collections including paper arxiv:2412.14835

- LinFusion: 1 GPU, 1 Minute, 16K Image
  Paper • 2409.02097 • Published • 33
- Phidias: A Generative Model for Creating 3D Content from Text, Image, and 3D Conditions with Reference-Augmented Diffusion
  Paper • 2409.11406 • Published • 26
- Diffusion Models Are Real-Time Game Engines
  Paper • 2408.14837 • Published • 122
- Segment Anything with Multiple Modalities
  Paper • 2408.09085 • Published • 21

- ReconX: Reconstruct Any Scene from Sparse Views with Video Diffusion Model
  Paper • 2408.16767 • Published • 30
- DreamRunner: Fine-Grained Storytelling Video Generation with Retrieval-Augmented Motion Adaptation
  Paper • 2411.16657 • Published • 17
- Autoregressive Video Generation without Vector Quantization
  Paper • 2412.14169 • Published • 14
- Progressive Multimodal Reasoning via Active Retrieval
  Paper • 2412.14835 • Published • 72

- Human-like Episodic Memory for Infinite Context LLMs
  Paper • 2407.09450 • Published • 60
- MUSCLE: A Model Update Strategy for Compatible LLM Evolution
  Paper • 2407.09435 • Published • 22
- Refuse Whenever You Feel Unsafe: Improving Safety in LLMs via Decoupled Refusal Training
  Paper • 2407.09121 • Published • 5
- ChatQA 2: Bridging the Gap to Proprietary LLMs in Long Context and RAG Capabilities
  Paper • 2407.14482 • Published • 26

- Cambrian-1: A Fully Open, Vision-Centric Exploration of Multimodal LLMs
  Paper • 2406.16860 • Published • 60
- Understanding Alignment in Multimodal LLMs: A Comprehensive Study
  Paper • 2407.02477 • Published • 22
- LongVILA: Scaling Long-Context Visual Language Models for Long Videos
  Paper • 2408.10188 • Published • 51
- Building and better understanding vision-language models: insights and future directions
  Paper • 2408.12637 • Published • 124

- BLINK: Multimodal Large Language Models Can See but Not Perceive
  Paper • 2404.12390 • Published • 24
- TextSquare: Scaling up Text-Centric Visual Instruction Tuning
  Paper • 2404.12803 • Published • 29
- Groma: Localized Visual Tokenization for Grounding Multimodal Large Language Models
  Paper • 2404.13013 • Published • 30
- InternLM-XComposer2-4KHD: A Pioneering Large Vision-Language Model Handling Resolutions from 336 Pixels to 4K HD
  Paper • 2404.06512 • Published • 30

- MM1: Methods, Analysis & Insights from Multimodal LLM Pre-training
  Paper • 2403.09611 • Published • 125
- Evolutionary Optimization of Model Merging Recipes
  Paper • 2403.13187 • Published • 51
- MobileVLM V2: Faster and Stronger Baseline for Vision Language Model
  Paper • 2402.03766 • Published • 13
- LLM Agent Operating System
  Paper • 2403.16971 • Published • 65

- Context Tuning for Retrieval Augmented Generation
  Paper • 2312.05708 • Published • 17
- Dense X Retrieval: What Retrieval Granularity Should We Use?
  Paper • 2312.06648 • Published • 1
- RAPTOR: Recursive Abstractive Processing for Tree-Organized Retrieval
  Paper • 2401.18059 • Published • 36
- Retrieval-Augmented Generation for Large Language Models: A Survey
  Paper • 2312.10997 • Published • 10