Régis Pierrard
regisss
59 followers · 17 following
AI & ML interests
None yet
Recent Activity
updated the dataset regisss/benchmarks about 1 month ago
reacted to onekq's post with 🔥 about 1 month ago
I'm now working on finetuning coding models. If you are GPU-hungry like me, you will find quantized models very helpful. But quantization for finetuning and quantization for inference are different and incompatible, so I made two collections here:

Inference (GGUF, via Ollama, CPU is enough): https://huggingface.co/collections/onekq-ai/ollama-ready-coding-models-67118c3cfa1af2cf04a926d6
Finetuning (bitsandbytes, QLoRA, GPU is needed): https://huggingface.co/collections/onekq-ai/qlora-ready-coding-models-67118771ce001b8f4cf946b2

For quantization, inference models are far more popular on HF than finetuning models. I use https://huggingface.co/QuantFactory to generate inference models (GGUF), and there are a few other choices, but there hasn't been such a service for finetuning models. DIY isn't too hard, though: I made a few myself, and you can find the script in the model cards. If the original model is small enough, you can even do it on a free T4 (available via Google Colab).

If you know a (small) coding model worthy of quantization, please let me know and I'd love to add it to the collections.
posted an update about 1 month ago
Interested in performing inference with an ONNX model? ⚡️ The Optimum docs about model inference with ONNX Runtime are now much clearer and simpler!

You want to deploy your favorite model from the Hub but you don't know how to export it to the ONNX format? You can do it in one line of code as follows:

```py
from optimum.onnxruntime import ORTModelForSequenceClassification

# Load the model from the hub and export it to the ONNX format
model_id = "distilbert-base-uncased-finetuned-sst-2-english"
model = ORTModelForSequenceClassification.from_pretrained(model_id, export=True)
```

Check out the whole guide 👉 https://huggingface.co/docs/optimum/onnxruntime/usage_guides/models
Articles
Organizing a Privacy-preserving Hackathon • Oct 17 • 8
Accelerating Vision-Language Models: BridgeTower on Habana Gaudi2 • Jun 29, 2023 • 2
Fast Inference on Large Language Models: BLOOMZ on Habana Gaudi2 Accelerator • Mar 28, 2023 • 1
Faster Training and Inference: Habana Gaudi®2 vs Nvidia A100 80GB • Dec 14, 2022 • 1
regisss's activity
upvoted the article "Organizing a Privacy-preserving Hackathon" by binoua (Oct 17 • 8) about 1 month ago
upvoted the article "Energy Scores for AI Models" by sasha (May 9 • 30) 7 months ago