Chat Completion
Generate a response given a list of messages in a conversational context, supporting both conversational Language Models (LLMs) and conversational Vision-Language Models (VLMs).
This is a subtask of text-generation and image-text-to-text.
Recommended models
Conversational Large Language Models (LLMs)
- google/gemma-2-2b-it: A text-generation model trained to follow instructions.
- meta-llama/Meta-Llama-3.1-8B-Instruct: Very powerful text generation model trained to follow instructions.
- microsoft/Phi-3-mini-4k-instruct: Small yet powerful text generation model.
- Qwen/Qwen2.5-7B-Instruct: Strong text generation model trained to follow instructions.
Conversational Vision-Language Models (VLMs)
- meta-llama/Llama-3.2-11B-Vision-Instruct: Powerful vision language model with great visual understanding and reasoning capabilities.
- Qwen/Qwen2-VL-7B-Instruct: Strong image-text-to-text model.
API Playground
For Chat Completion models, we provide an interactive UI Playground for easier testing:
- Quickly iterate on your prompts from the UI.
- Set and override system, assistant and user messages.
- Browse and select models currently available on the Inference API.
- Compare the output of two models side-by-side.
- Adjust request parameters from the UI.
- Easily switch between UI view and code snippets.
Access the Inference UI Playground and start exploring: https://huggingface.co/playground
Using the API
The API supports:
- Using the chat completion API compatible with the OpenAI SDK.
- Using grammars, constraints, and tools.
- Streaming the output.
Code snippet example for conversational LLMs
Using huggingface_hub:
from huggingface_hub import InferenceClient

client = InferenceClient(api_key="hf_***")

messages = [
    {
        "role": "user",
        "content": "What is the capital of France?"
    }
]

stream = client.chat.completions.create(
    model="google/gemma-2-2b-it",
    messages=messages,
    max_tokens=500,
    stream=True
)

for chunk in stream:
    print(chunk.choices[0].delta.content, end="")
Using openai:
from openai import OpenAI

client = OpenAI(
    base_url="https://api-inference.huggingface.co/v1/",
    api_key="hf_***"
)

messages = [
    {
        "role": "user",
        "content": "What is the capital of France?"
    }
]

stream = client.chat.completions.create(
    model="google/gemma-2-2b-it",
    messages=messages,
    max_tokens=500,
    stream=True
)

for chunk in stream:
    print(chunk.choices[0].delta.content, end="")
To use the Python client, see huggingface_hub’s package reference.
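The messages list can also carry a system prompt and earlier turns of the conversation. Below is a minimal, non-streaming sketch using huggingface_hub; the system prompt and the prior assistant turn are invented for illustration.

from huggingface_hub import InferenceClient

client = InferenceClient(api_key="hf_***")

# A multi-turn conversation: system instructions, then alternating user/assistant turns.
messages = [
    {"role": "system", "content": "You are a concise geography assistant."},
    {"role": "user", "content": "What is the capital of France?"},
    {"role": "assistant", "content": "The capital of France is Paris."},
    {"role": "user", "content": "Roughly how many people live there?"}
]

# Without stream=True, the client returns the full completion in one response object.
response = client.chat.completions.create(
    model="meta-llama/Meta-Llama-3.1-8B-Instruct",
    messages=messages,
    max_tokens=500
)

print(response.choices[0].message.content)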
Code snippet example for conversational VLMs
Using huggingface_hub:
from huggingface_hub import InferenceClient

client = InferenceClient(api_key="hf_***")

messages = [
    {
        "role": "user",
        "content": [
            {
                "type": "text",
                "text": "Describe this image in one sentence."
            },
            {
                "type": "image_url",
                "image_url": {
                    "url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"
                }
            }
        ]
    }
]

stream = client.chat.completions.create(
    model="meta-llama/Llama-3.2-11B-Vision-Instruct",
    messages=messages,
    max_tokens=500,
    stream=True
)

for chunk in stream:
    print(chunk.choices[0].delta.content, end="")
Using openai:
from openai import OpenAI

client = OpenAI(
    base_url="https://api-inference.huggingface.co/v1/",
    api_key="hf_***"
)

messages = [
    {
        "role": "user",
        "content": [
            {
                "type": "text",
                "text": "Describe this image in one sentence."
            },
            {
                "type": "image_url",
                "image_url": {
                    "url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"
                }
            }
        ]
    }
]

stream = client.chat.completions.create(
    model="meta-llama/Llama-3.2-11B-Vision-Instruct",
    messages=messages,
    max_tokens=500,
    stream=True
)

for chunk in stream:
    print(chunk.choices[0].delta.content, end="")
To use the Python client, see huggingface_hub’s package reference.
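The OpenAI-compatible examples above ultimately issue a plain HTTP POST. If you prefer not to use a client library, the following sketch shows the equivalent raw request; the URL is an assumption built from the base_url used above plus the standard /chat/completions route.

import requests

API_URL = "https://api-inference.huggingface.co/v1/chat/completions"
headers = {"Authorization": "Bearer hf_***"}

payload = {
    "model": "google/gemma-2-2b-it",
    "messages": [
        {"role": "user", "content": "What is the capital of France?"}
    ],
    "max_tokens": 500
}

# Non-streaming request: the full answer comes back as a single JSON object.
response = requests.post(API_URL, headers=headers, json=payload)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])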
API specification
Request
Payload | Type | Description
---|---|---|
frequency_penalty | number | Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model’s likelihood to repeat the same line verbatim. |
logprobs | boolean | Whether to return log probabilities of the output tokens or not. If true, returns the log probabilities of each output token returned in the content of message. |
max_tokens | integer | The maximum number of tokens that can be generated in the chat completion. |
messages* | object[] | A list of messages comprising the conversation so far. |
content* | unknown | One of the following: |
(#1) | string | |
(#2) | object[] | |
(#1) | object | |
text* | string | |
type* | enum | Possible values: text. |
(#2) | object | |
image_url* | object | |
url* | string | |
type* | enum | Possible values: image_url. |
name | string | |
role* | string | |
presence_penalty | number | Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model’s likelihood to talk about new topics. |
response_format | unknown | One of the following: |
(#1) | object | |
type* | enum | Possible values: json. |
value* | unknown | A string that represents a JSON Schema. JSON Schema is a declarative language that allows you to annotate JSON documents with types and descriptions. |
(#2) | object | |
type* | enum | Possible values: regex. |
value* | string | |
seed | integer | |
stop | string[] | Up to 4 sequences where the API will stop generating further tokens. |
stream | boolean | |
stream_options | object | |
include_usage* | boolean | If set, an additional chunk will be streamed before the data: [DONE] message. The usage field on this chunk shows the token usage statistics for the entire request, and the choices field will always be an empty array. All other chunks will also include a usage field, but with a null value. |
temperature | number | What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both. |
tool_choice | unknown | One of the following: |
(#1) | enum | Possible values: auto. |
(#2) | enum | Possible values: none. |
(#3) | enum | Possible values: required. |
(#4) | object | |
function* | object | |
name* | string | |
tool_prompt | string | A prompt to be appended before the tools. |
tools | object[] | A list of tools the model may call. Currently, only functions are supported as a tool. Use this to provide a list of functions the model may generate JSON inputs for. |
function* | object | |
arguments* | unknown | |
description | string | |
name* | string | |
type* | string | |
top_logprobs | integer | An integer between 0 and 5 specifying the number of most likely tokens to return at each token position, each with an associated log probability. logprobs must be set to true if this parameter is used. |
top_p | number | An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. |
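As an illustration of the response_format parameter above, the sketch below constrains the model’s output with a JSON Schema, following the {"type": "json", "value": ...} shape documented in the payload table. The schema itself is a made-up example.

from huggingface_hub import InferenceClient

client = InferenceClient(api_key="hf_***")

# Hypothetical schema: force the answer into a small JSON object.
schema = {
    "type": "object",
    "properties": {
        "city": {"type": "string"},
        "country": {"type": "string"}
    },
    "required": ["city", "country"]
}

response = client.chat.completions.create(
    model="google/gemma-2-2b-it",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
    max_tokens=100,
    # "json" plus a JSON Schema, as described in the response_format rows above.
    response_format={"type": "json", "value": schema}
)

print(response.choices[0].message.content)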
Some options can be configured by passing headers to the Inference API. Here are the available headers:
Headers | Type | Description
---|---|---|
authorization | string | Authentication header in the form 'Bearer hf_****', where hf_**** is a personal user access token with Inference API permission. You can generate one from your settings page. |
x-use-cache | boolean, default to true | There is a cache layer on the Inference API to speed up requests we have already seen. Most models can use those results as they are deterministic (meaning the outputs will be the same anyway). However, if you use a nondeterministic model, you can set this parameter to bypass the caching mechanism and force a fresh query. Read more about caching here. |
x-wait-for-model | boolean, default to false | If the model is not ready, wait for it instead of receiving a 503 error. This limits the number of requests required to get your inference done. It is advised to only set this flag to true after receiving a 503 error, as it will limit hanging in your application to known places. Read more about model availability here. |
For more information about Inference API headers, check out the parameters guide.
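If you are calling the API through the Python client, one way to set these headers is to pass them when constructing the InferenceClient; the sketch below assumes the client forwards them unchanged.

from huggingface_hub import InferenceClient

# x-wait-for-model: wait for the model to load instead of receiving a 503 error.
# x-use-cache: disable the shared cache for this client's requests.
client = InferenceClient(
    api_key="hf_***",
    headers={"x-wait-for-model": "true", "x-use-cache": "false"}
)

response = client.chat.completions.create(
    model="google/gemma-2-2b-it",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
    max_tokens=50
)

print(response.choices[0].message.content)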
Response
Output type depends on the stream input parameter.
If stream is false (default), the response will be a JSON object with the following fields:
Body | Type | Description
---|---|---|
choices | object[] | |
finish_reason | string | |
index | integer | |
logprobs | object | |
content | object[] | |
logprob | number | |
token | string | |
top_logprobs | object[] | |
logprob | number | |
token | string | |
message | unknown | One of the following: |
(#1) | object | |
content | string | |
role | string | |
(#2) | object | |
role | string | |
tool_calls | object[] | |
function | object | |
arguments | unknown | |
description | string | |
name | string | |
id | string | |
type | string | |
created | integer | |
id | string | |
model | string | |
system_fingerprint | string | |
usage | object | |
completion_tokens | integer | |
prompt_tokens | integer | |
total_tokens | integer |
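For reference, this is roughly how the fields above surface on the object returned by the Python clients when stream is false; attribute names mirror the JSON fields.

from huggingface_hub import InferenceClient

client = InferenceClient(api_key="hf_***")

response = client.chat.completions.create(
    model="google/gemma-2-2b-it",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
    max_tokens=50
)

# Each field documented in the Body table is available as an attribute.
choice = response.choices[0]
print(choice.message.content)   # the generated answer
print(choice.finish_reason)     # e.g. "stop" or "length"
print(response.usage.prompt_tokens, response.usage.completion_tokens, response.usage.total_tokens)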
If stream is true, generated tokens are returned as a stream, using Server-Sent Events (SSE).
For more information about streaming, check out this guide.
Body | Type | Description
---|---|---|
choices | object[] | |
delta | unknown | One of the following: |
(#1) | object | |
content | string | |
role | string | |
(#2) | object | |
role | string | |
tool_calls | object | |
function | object | |
arguments | string | |
name | string | |
id | string | |
index | integer | |
type | string | |
finish_reason | string | |
index | integer | |
logprobs | object | |
content | object[] | |
logprob | number | |
token | string | |
top_logprobs | object[] | |
logprob | number | |
token | string | |
created | integer | |
id | string | |
model | string | |
system_fingerprint | string | |
usage | object | |
completion_tokens | integer | |
prompt_tokens | integer | |
total_tokens | integer |
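To receive the usage numbers at the end of a stream, the include_usage option documented above can be set through stream_options. Below is a sketch with the openai client, assuming the option is forwarded to the API as-is.

from openai import OpenAI

client = OpenAI(
    base_url="https://api-inference.huggingface.co/v1/",
    api_key="hf_***"
)

stream = client.chat.completions.create(
    model="google/gemma-2-2b-it",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
    max_tokens=500,
    stream=True,
    # Request a final chunk carrying token usage for the whole request.
    stream_options={"include_usage": True}
)

for chunk in stream:
    # The final chunk has an empty choices array and a populated usage field.
    if chunk.choices:
        print(chunk.choices[0].delta.content or "", end="")
    if chunk.usage is not None:
        print(f"\n\nTotal tokens: {chunk.usage.total_tokens}")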