- FastViT: A Fast Hybrid Vision Transformer using Structural Reparameterization — arXiv:2303.14189, published Mar 24, 2023
- SAM-CLIP: Merging Vision Foundation Models towards Semantic and Spatial Understanding — arXiv:2310.15308, published Oct 23, 2023
- MobileCLIP: Fast Image-Text Models through Multi-Modal Reinforced Training — arXiv:2311.17049, published Nov 28, 2023
- Graph-Based Captioning: Enhancing Visual Descriptions by Interconnecting Region Captions — arXiv:2407.06723, published Jul 9, 2024
- Dataset Decomposition: Faster LLM Training with Variable Sequence Length Curriculum — arXiv:2405.13226, published May 21, 2024
- CLIP with Quality Captions: A Strong Pretraining for Vision Tasks — arXiv:2405.08911, published May 14, 2024
- MobileCLIP Models + DataCompDR Data (collection, 22 items, updated Oct 4) — MobileCLIP: mobile-friendly image-text models with SOTA zero-shot capabilities; DataCompDR: improved datasets for training SOTA image-text models.