Last week we were blessed with open-source models! A recap
merve/nov-29-releases-674ccc255a57baf97b1e2d31
🖼️ Multimodal
> At Hugging Face we released SmolVLM, a performant and efficient smol vision language model (quick-start sketch after this list)
> Show Lab released ShowUI-2B: a new vision-language-action model for building GUI/web automation agents 🤖
> Rhymes AI released the base models of Aria: Aria-Base-64K and Aria-Base-8K, with their respective context lengths
> The ViDoRe team released ColSmolVLM: a new ColPali-like retrieval model based on SmolVLM
> Dataset: Llava-CoT-o1-Instruct, a new dataset labelled using the LLaVA-CoT multimodal reasoning model
> Dataset: LLaVA-CoT-100k, the dataset used to train LLaVA-CoT, released by its creators
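To try SmolVLM, here is a minimal sketch, assuming the HuggingFaceTB/SmolVLM-Instruct checkpoint and the standard transformers vision-to-text auto classes; check the model card for the exact processor usage.

```python
# Minimal sketch: image description with SmolVLM via transformers.
# Assumes the HuggingFaceTB/SmolVLM-Instruct checkpoint and the standard
# vision-to-text auto classes; the image URL is just a placeholder.
import torch
from transformers import AutoProcessor, AutoModelForVision2Seq
from transformers.image_utils import load_image

model_id = "HuggingFaceTB/SmolVLM-Instruct"
processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForVision2Seq.from_pretrained(model_id, torch_dtype=torch.bfloat16)

image = load_image("https://example.com/cat.png")  # placeholder URL
messages = [{"role": "user", "content": [
    {"type": "image"},
    {"type": "text", "text": "Describe this image."},
]}]
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(text=prompt, images=[image], return_tensors="pt")

generated_ids = model.generate(**inputs, max_new_tokens=128)
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])
```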
💬 LLMs
> The Qwen team released QwQ-32B-Preview, a state-of-the-art open-source reasoning model that broke the internet 🔥 (quick-start sketch after this list)
> Alibaba released Marco-o1, a new open-source reasoning model 🔥
> NVIDIA released Hymba 1.5B Base and Instruct, new state-of-the-art SLMs with a hybrid architecture (Mamba + transformer)
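A minimal sketch for chatting with QwQ-32B-Preview through transformers, assuming the Qwen/QwQ-32B-Preview checkpoint (at 32B parameters you will want a large GPU or quantization):

```python
# Minimal sketch: running QwQ-32B-Preview as a standard causal LM chat model.
# Assumes the Qwen/QwQ-32B-Preview checkpoint; the prompt is just an example.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/QwQ-32B-Preview"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [{"role": "user", "content": "How many r's are in the word 'strawberry'?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=512)
# Decode only the newly generated tokens (the model's reasoning + answer).
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```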
⏯️ Image/Video Generation
> Qwen2VL-Flux: a new image generation model based on the Qwen2VL image encoder and T5, with Flux handling generation
> Lightricks released LTX-Video, a new DiT-based video generation model that can generate 24 FPS videos at 768x512 resolution (try-it sketch after this list) ⏯️
> Dataset: Image Preferences, a new image generation preference dataset built through the DIBT community effort by Argilla 🏷️
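To try LTX-Video, here is a sketch assuming a recent diffusers build that ships an LTX text-to-video pipeline for the Lightricks/LTX-Video checkpoint; see the model card for the exact class and defaults.

```python
# Minimal sketch: text-to-video with LTX-Video through diffusers.
# Assumes a recent diffusers release that exposes LTXPipeline for the
# Lightricks/LTX-Video checkpoint; the prompt and frame count are examples.
import torch
from diffusers import LTXPipeline
from diffusers.utils import export_to_video

pipe = LTXPipeline.from_pretrained("Lightricks/LTX-Video", torch_dtype=torch.bfloat16).to("cuda")

frames = pipe(
    prompt="A woman walks through a sunlit forest, cinematic, shallow depth of field",
    width=768,
    height=512,
    num_frames=97,  # roughly 4 seconds at 24 FPS
).frames[0]

export_to_video(frames, "ltx_video.mp4", fps=24)
```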
Audio
> OuteAI released OuteTTS-0.2-500M, a new multilingual text-to-speech model based on Qwen-2.5-0.5B and trained on 5B audio prompt tokens