Wauplin posted an update on Jul 18:
🚀 Just released version 0.24.0 of the huggingface_hub Python library!

Exciting updates include:
⚡ InferenceClient is now a drop-in replacement for OpenAI's chat completion!
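
For example, here is a minimal sketch of the OpenAI-style syntax (the model name and prompt are illustrative):

```python
# Swap `from openai import OpenAI` for InferenceClient and keep the
# same chat.completions.create call shape.
from huggingface_hub import InferenceClient

client = InferenceClient()  # uses your HF token from the environment if set

response = client.chat.completions.create(
    model="meta-llama/Meta-Llama-3-8B-Instruct",  # illustrative model
    messages=[{"role": "user", "content": "What is deep learning?"}],
    max_tokens=100,
)
print(response.choices[0].message.content)
```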

✨ Support for response_format, adapter_id, truncate, and more in InferenceClient
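
A hedged sketch of response_format constraining the output to JSON, following the shape shown in the chat_completion docs (model, prompt, and schema are illustrative; adapter_id and truncate apply to text_generation):

```python
from huggingface_hub import InferenceClient

client = InferenceClient("meta-llama/Meta-Llama-3-8B-Instruct")  # illustrative model

# Constrain the reply to a JSON object matching a simple schema.
response = client.chat_completion(
    messages=[{"role": "user", "content": "Return a person named Alice, aged 30."}],
    response_format={
        "type": "json",
        "value": {
            "properties": {
                "name": {"type": "string"},
                "age": {"type": "integer"},
            },
            "required": ["name", "age"],
        },
    },
    max_tokens=100,
)
print(response.choices[0].message.content)
```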

💾 New serialization module with a save_torch_model helper that handles shared layers, sharding, naming conventions, and safe serialization. It's basically a condensed version of logic scattered across safetensors, transformers, and accelerate.
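
A minimal sketch of the helper (the model and directory path are illustrative):

```python
import torch

from huggingface_hub import save_torch_model

model = torch.nn.Linear(4, 4)  # any torch.nn.Module works here

# Saves the model as (possibly sharded) safetensors files in the folder,
# taking care of shared layers and the standard file naming convention.
save_torch_model(model, save_directory="my-model-folder")
```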

πŸ“ Optimized HfFileSystem to avoid getting rate limited when browsing HuggingFaceFW/fineweb

🔨 HfApi & CLI improvements: prevent empty commits, create repos inside a resource group, webhooks API, more options in the Search API, etc.
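
For instance, a sketch of the new webhooks API (see the HfApi reference for the full parameter list):

```python
from huggingface_hub import HfApi

api = HfApi()

# List webhooks configured on your account (requires authentication).
for webhook in api.list_webhooks():
    print(webhook)
```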

Check out the full release notes for more details:
Wauplin/huggingface_hub#7
👀

Nice work 🔥

Have they done anything with agents and tools? Or is this for inference clients only? How can we implement this locally?


Are you referring to Agents in transformers? If yes, here are the docs about it: https://huggingface.co/docs/transformers/agents. Regarding tools, TGI supports them, and so does the InferenceClient from huggingface_hub, meaning you can pass tools to chat_completion (see the "Example using tools:" section in https://huggingface.co/docs/huggingface_hub/v0.24.0/en/package_reference/inference_client#huggingface_hub.InferenceClient.chat_completion). These tool parameters were already available in huggingface_hub 0.23.x.
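
For reference, a sketch along the lines of that docs example (the model name and tool schema are illustrative):

```python
from huggingface_hub import InferenceClient

client = InferenceClient("meta-llama/Meta-Llama-3-70B-Instruct")  # illustrative model

# OpenAI-style tool definition; the model may answer with tool_calls.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_current_weather",
            "description": "Get the current weather in a given location",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {"type": "string", "description": "City name"},
                },
                "required": ["location"],
            },
        },
    }
]

response = client.chat_completion(
    messages=[{"role": "user", "content": "What's the weather in Paris?"}],
    tools=tools,
    tool_choice="auto",
    max_tokens=200,
)
print(response.choices[0].message.tool_calls)
```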

Hope this answers your question :)

Thanks a lot @Wauplin!

Please, how does it work? I don't really use it.


It depends on what you want to do. We have full documentation here: https://huggingface.co/docs/huggingface_hub/index, with many guides showing how to use the library.