Post
418
[SATURDAY ROUNDUP] ☕️🧑‍🎓
In case you missed everything this week: it's all about vision language models and image preference datasets. Here are the models and datasets you can use in your projects.
QwQ-32B-Preview is the first open-weights model to reason like o1 with comparable performance. It's large, but it's acing some of the hardest tasks.
https://bsky.app/profile/philschmid.bsky.social/post/3lbylz6nzqk25
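A minimal sketch of running QwQ-32B-Preview with transformers, assuming the Qwen/QwQ-32B-Preview repo id and enough GPU memory (or quantization) for a 32B model:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/QwQ-32B-Preview"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

# QwQ is a chat model, so go through the chat template and let it "think out loud".
messages = [{"role": "user", "content": "How many positive integers below 100 are divisible by 3 or 5?"}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```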
SmolVLM is a vision version of the recently released SmolLM2. It uses the Idefics3 approach to add a vision encoder; the main differences are a smaller language model (8B → 1.7B) and more aggressive image compression. The result is a model that is very accurate for its memory footprint.
https://huggingface.co/blog/smolvlm
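A minimal sketch of image + text inference with SmolVLM, assuming the HuggingFaceTB/SmolVLM-Instruct checkpoint and a local example.jpg:

```python
import torch
from PIL import Image
from transformers import AutoProcessor, AutoModelForVision2Seq

model_id = "HuggingFaceTB/SmolVLM-Instruct"
processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForVision2Seq.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

# Build an Idefics3-style chat prompt with one image placeholder plus a text question.
image = Image.open("example.jpg")
messages = [{"role": "user", "content": [{"type": "image"}, {"type": "text", "text": "Describe this image."}]}]
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(text=prompt, images=[image], return_tensors="pt").to(model.device)

generated = model.generate(**inputs, max_new_tokens=128)
print(processor.batch_decode(generated, skip_special_tokens=True)[0])
```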
ColSmolVLM is a vision embedding model based on SmolVLM, using the ColBERT-style late-interaction approach from ColPali. It performs very well at document retrieval, and everyone should test it out in their RAG setups.
https://huggingface.co/posts/merve/663466156074132
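A rough sketch of ColBERT-style document retrieval with ColSmolVLM via the colpali-engine library. The ColIdefics3 class names and the vidore/colsmolvlm-alpha repo id are assumptions here; check the model card for the exact identifiers.

```python
import torch
from PIL import Image
from colpali_engine.models import ColIdefics3, ColIdefics3Processor

model_id = "vidore/colsmolvlm-alpha"
model = ColIdefics3.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto").eval()
processor = ColIdefics3Processor.from_pretrained(model_id)

# Embed document page images and text queries as multi-vector representations,
# then score them with late interaction (MaxSim), ColBERT-style.
pages = [Image.open("page_1.png"), Image.open("page_2.png")]
queries = ["What was Q3 revenue?"]
with torch.no_grad():
    page_embeddings = model(**processor.process_images(pages).to(model.device))
    query_embeddings = model(**processor.process_queries(queries).to(model.device))

scores = processor.score_multi_vector(query_embeddings, page_embeddings)
print(scores)  # one row per query, one column per page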
In an effort to build a FLUX-level open-source image generation model, the community is building a dataset of image preferences. The dataset is already open and the project is still running. Join in!
https://huggingface.co/posts/davidberenstein1957/405018978675827
TRL tutorial drop - this week I released a batch of tutorials on fine-tuning and aligning models with TRL. If you're upskilling in this space, you should check these out.
https://bsky.app/profile/benburtenshaw.bsky.social/post/3lbrc56ap3222
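For flavour, a minimal supervised fine-tuning sketch with TRL's SFTTrainer, roughly the pattern the tutorials walk through. The model and dataset ids are placeholders, and this assumes a recent TRL version where SFTTrainer accepts a model id string:

```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Any conversational dataset works; trl-lib/Capybara is a small public example.
dataset = load_dataset("trl-lib/Capybara", split="train")

trainer = SFTTrainer(
    model="HuggingFaceTB/SmolLM2-1.7B-Instruct",          # placeholder base model
    args=SFTConfig(output_dir="smollm2-sft", max_seq_length=2048),
    train_dataset=dataset,
)
trainer.train()
```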