Jeff Boudier

jeffboudier

AI & ML interests

Hugging Face!

Recent Activity

Articles

Organizations

Hugging Face, Renault Group, Intel, Spaces-explorers, Qualcomm, julsimon-test, AWS Inferentia and Trainium, Spotify, Amazon SageMaker Community, Hugging Face Infinity, Demo Corp, Habana AI, Hugging Face Optimum, Hugging Test Lab, Ericsson Global AI Accelerator, WIP, Evaluation on the Hub, HuggingFaceM4, Hackathon Team 1, Open-Source AI Meetup, AMD, model-attribution-challenge-admin, model-attribution-challenge, Inference Endpoints, Hugging Face OSS Metrics, EU org, Enterprise Explorers, Optimum Nvidia, Social Post Explorers, Optimum-Intel, Hugging Face Machine Learning Optimization, Hugging Face Party @ PyTorch Conference, Google Cloud 🤝🏻 Hugging Face, Huggingface HUGS, Nerdy Face, open/ acc

jeffboudier's activity

Reacted to andito's post with ❤️ 3 days ago
Let's go! We are releasing SmolVLM, a smol 2B VLM built for on-device inference that outperforms all models at similar GPU RAM usage and token throughput.

- SmolVLM generates tokens 7.5 to 16 times faster than Qwen2-VL! 🤯
- Other models at this size crash a laptop, but SmolVLM comfortably generates 17 tokens/sec on a MacBook! 🚀
- SmolVLM can be fine-tuned on a Google Colab, or process millions of documents with a consumer GPU!
- SmolVLM even outperforms larger models on video benchmarks, despite not being trained on video!

Check out more!
Demo: HuggingFaceTB/SmolVLM
Blog: https://huggingface.co/blog/smolvlm
Model: HuggingFaceTB/SmolVLM-Instruct
Fine-tuning script: https://github.com/huggingface/smollm/blob/main/finetuning/Smol_VLM_FT.ipynb
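For the SmolVLM-Instruct model linked above, a minimal local inference sketch with transformers could look like the following. It assumes the usual processor + AutoModelForVision2Seq pattern used by similar Hugging Face VLMs, and the image path and prompt are placeholders; defer to the model card and the fine-tuning notebook for the exact recommended snippets.

import torch
from transformers import AutoProcessor, AutoModelForVision2Seq
from transformers.image_utils import load_image

model_id = "HuggingFaceTB/SmolVLM-Instruct"
processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForVision2Seq.from_pretrained(model_id, torch_dtype=torch.bfloat16)

image = load_image("path/or/url/to/your_image.jpg")  # placeholder image

# Chat-style prompt with one image turn and one text turn.
messages = [
    {"role": "user", "content": [
        {"type": "image"},
        {"type": "text", "text": "Describe this image."},
    ]},
]
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(text=prompt, images=[image], return_tensors="pt")

# Generate and decode the answer.
generated_ids = model.generate(**inputs, max_new_tokens=256)
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])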
posted an update 8 days ago
replied to clem's post about 1 month ago

📆 Wed Oct 30th - 9am PT / 12pm ET / 18h CET
Can't wait!

Reacted to clem's post with ❤️🤗🔥🚀 about 1 month ago
This is no Woodstock AI, but it will be fun nonetheless haha. I'll be hosting a live workshop with team members next week about the Enterprise Hugging Face Hub.

1,000 spots available, first-come first-served, with some surprises during the stream!

You can register and add it to your calendar here: https://streamyard.com/watch/JS2jHsUP3NDM
Reacted to victor's post with 🚀❤️🔥🤗 about 2 months ago
NEW - Inference Playground

Maybe, like me, you have always wanted a super easy way to compare llama3.2-1B vs. llama3.2-3B, or the same model at different temperatures?

Trying and comparing warm Inference API models has never been easier!
Just go to https://hf.co/playground, set your token, and you're ready to go.
We'll keep improving, feedback welcome 😊
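If you would rather script the same kind of comparison against the Inference API, a rough sketch with huggingface_hub's InferenceClient is below; the Llama 3.2 model IDs and sampling settings are assumptions, so swap in whatever you would pick in the Playground.

from huggingface_hub import InferenceClient

client = InferenceClient(token="hf_...")  # your Hugging Face access token

messages = [{"role": "user", "content": "Explain attention in one sentence."}]

# Assumed model IDs for the 1B vs. 3B comparison mentioned above.
for model_id in ("meta-llama/Llama-3.2-1B-Instruct", "meta-llama/Llama-3.2-3B-Instruct"):
    out = client.chat_completion(
        messages=messages,
        model=model_id,
        max_tokens=128,
        temperature=0.7,  # vary this to compare the same model at different temperatures
    )
    print(f"--- {model_id} ---")
    print(out.choices[0].message.content)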
posted an update about 2 months ago
upvoted an article about 2 months ago