Hugging Face inference endpoints
#18 · opened by AlexNevo
Hi,
I would like to configure a Hugging Face inference endpoint to deploy this model. What would you recommend? Especially for the settings Max Input Length (per Query), Max Number of Tokens (per Query), Max Batch Prefill Tokens, and Max Batch Total Tokens, considering that my DDL is about 4,500 tokens long:
Hi @AlexNevo, you may refer to their documentation for how to set the parameters.
I believe you would need to increase the max input length given your large DDL size.
https://huggingface.co/docs/inference-endpoints/index
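For reference, here is a rough sketch of how such an endpoint could be created programmatically with `huggingface_hub`, assuming it runs the standard text-generation-inference (TGI) container. The endpoint name, model repository, hardware, and token limits below are placeholders/illustrative values, not recommendations for this specific model. The main idea is that `MAX_INPUT_LENGTH` needs to comfortably exceed the ~4,500-token DDL plus the question, and `MAX_TOTAL_TOKENS` adds headroom on top of that for the generated output.

```python
# Sketch only: create an Inference Endpoint running the TGI container with
# explicit token limits. Model repo, hardware, and limit values are illustrative.
from huggingface_hub import create_inference_endpoint

endpoint = create_inference_endpoint(
    "my-text2sql-endpoint",            # hypothetical endpoint name
    repository="<this-model-repo>",    # placeholder: the model discussed in this thread
    framework="pytorch",
    task="text-generation",
    accelerator="gpu",
    vendor="aws",
    region="us-east-1",
    type="protected",
    instance_size="x1",                # pick hardware that actually fits the model
    instance_type="nvidia-a10g",
    custom_image={
        "health_route": "/health",
        "url": "ghcr.io/huggingface/text-generation-inference:latest",
        "env": {
            "MODEL_ID": "/repository",
            # ~4,500-token DDL + question must fit in the input window
            "MAX_INPUT_LENGTH": "6144",
            # input + generated tokens allowed per request
            "MAX_TOTAL_TOKENS": "7168",
            # prefill budget per batch; at least MAX_INPUT_LENGTH for a single request
            "MAX_BATCH_PREFILL_TOKENS": "6144",
            # total tokens across a batch; raise it if you need concurrent requests
            "MAX_BATCH_TOTAL_TOKENS": "14336",
        },
    },
)
endpoint.wait()       # block until the endpoint is up
print(endpoint.url)
```

These env vars should correspond to the fields you listed in the endpoint UI (Max Input Length, Max Number of Tokens, Max Batch Prefill Tokens, Max Batch Total Tokens), so you can also just raise the same values there instead of going through the API.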