---
datasets:
- glaiveai/glaive-function-calling-v2
---

## Tool Information

Define the tools and their functionality as a list of dictionaries. Each entry specifies the function's name, a short description, and its parameters as a JSON Schema object.

```python
tools_info = [
    {
        "name": "cancel_reservation",
        "description": "cancel a reservation",
        "parameters": {
            "type": "object",
            "properties": {
                "reservation_number": {
                    "type": "integer",
                    "description": "Reservation number"
                }
            },
            "required": ["reservation_number"]
        }
    },
    {
        "name": "get_reservations",
        "description": "get reservation numbers",
        "parameters": {
            "type": "object",
            "properties": {
                "user_id": {
                    "type": "integer",
                    "description": "User id"
                }
            },
            "required": ["user_id"]
        }
    },
]
```

## System Initialization

Build the system prompt that exposes the tool definitions to the model.

```python
import json

system = f"You are a helpful assistant with access to the following functions: \n {json.dumps(tools_info, indent=2)}."
```

## Conversation Flow

Simulate a conversation flow in which the user asks to cancel a reservation.

```python
messages = [
    {"role": "system", "content": system},
    {"role": "user", "content": "Help me to cancel a reservation"},
    {"role": "assistant", "content": "I can help with that. Could you please provide me with the reservation number?"},
    {"role": "user", "content": "the reservation number is 1011"}
]
```
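
Given this context, the fine-tuned model should reply with a function call rather than prose, for example `<func_call> {"name": "cancel_reservation", "arguments": {"reservation_number": 1011}}`. This format is illustrated in the next example; the exact string here is an assumption, not captured model output.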

Alternatively, the user may ask to list their reservations. Note the use of the "tool" role, which carries the function's output back to the model.

```python
messages = [
    {"role": "system", "content": system},
    {"role": "user", "content": "Help me to find my reservations, my user id is 110"},
    {"role": "assistant", "content": '<func_call> {"name": "get_reservations", "arguments": {"user_id": 110}}'},
    {"role": "tool", "content": '["AB001","CD002","GG100"]'}
]
```
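
In practice, the content of the "tool" message is produced by your own code: parse the JSON payload that follows the `<func_call>` marker, call a local implementation, and serialize the result. Below is a minimal sketch of such a dispatcher; the `get_reservations` and `cancel_reservation` implementations are hypothetical stand-ins, and only the `<func_call>` format itself comes from the examples above.

```python
import json

def get_reservations(user_id: int) -> list:
    # Hypothetical stand-in; replace with a real lookup.
    return ["AB001", "CD002", "GG100"]

def cancel_reservation(reservation_number: int) -> str:
    # Hypothetical stand-in; replace with a real cancellation call.
    return f"Reservation {reservation_number} cancelled"

TOOL_REGISTRY = {
    "get_reservations": get_reservations,
    "cancel_reservation": cancel_reservation,
}

def dispatch_func_call(assistant_content: str) -> str:
    """Parse a '<func_call> {...}' message, run the matching tool, return JSON."""
    payload = json.loads(assistant_content.removeprefix("<func_call>").strip())
    result = TOOL_REGISTRY[payload["name"]](**payload["arguments"])
    return json.dumps(result)

# Reproduces the "tool" message content from the example above.
print(dispatch_func_call(messages[2]["content"]))
```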

## Model Loading

Load the causal language model and tokenizer.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "caldana/function_calling_llama3_8b_instruct"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
```

## Generating Response

Generate a response from the model based on the conversation context.

```python
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt"
).to(model.device)

# Llama 3 instruct models end a turn with <|eot_id|> in addition to the EOS token.
terminators = [
    tokenizer.eos_token_id,
    tokenizer.convert_tokens_to_ids("<|eot_id|>")
]

outputs = model.generate(
    input_ids,
    max_new_tokens=256,
    eos_token_id=terminators,
    do_sample=True,
    temperature=0.6,
    top_p=0.9,
)

# Decode only the newly generated tokens, skipping the prompt.
response = outputs[0][input_ids.shape[-1]:]
print(tokenizer.decode(response, skip_special_tokens=True))
```
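
The generated text is either a plain reply or a `<func_call>` payload. A minimal way to route between the two, reusing the hypothetical `dispatch_func_call` helper sketched earlier:

```python
decoded = tokenizer.decode(response, skip_special_tokens=True)

if decoded.lstrip().startswith("<func_call>"):
    # Function call: execute it and hand the result back via the "tool" role.
    messages.append({"role": "assistant", "content": decoded})
    messages.append({"role": "tool", "content": dispatch_func_call(decoded)})
else:
    # Plain-text reply for the user.
    print(decoded)
```

After appending the tool result, re-run the generation step so the model can turn the raw function output into a user-facing answer.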