---
base_model: unsloth/phi-3.5-mini-instruct-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
datasets:
- Salesforce/xlam-function-calling-60k
pipeline_tag: text-generation
library_name: peft
---

# Phi-3.5 Mini Function Calling

This model is a function-calling version of [microsoft/Phi-3.5-mini-instruct](https://huggingface.co/microsoft/Phi-3.5-mini-instruct), fine-tuned on the [Salesforce/xlam-function-calling-60k](https://huggingface.co/datasets/Salesforce/xlam-function-calling-60k) dataset.
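
Each record in that dataset pairs a natural-language query with the tools available for it and the expected tool calls. A sketch of the record shape, with made-up values for illustration (the real dataset stores `tools` and `answers` as JSON-encoded strings):

```python
# Illustrative record shape from Salesforce/xlam-function-calling-60k
# (values invented for illustration; not an actual dataset row)
record = {
    "query": "Get the next page of upcoming CS:GO matches.",
    "tools": '[{"name": "upcoming", "parameters": {"page": {"type": "int", "default": "1"}}}]',
    "answers": '[{"name": "upcoming", "arguments": {"page": 2}}]',
}
```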


# Uploaded model

- **Developed by:** akshayballal
- **License:** apache-2.0
- **Finetuned from model:** unsloth/phi-3.5-mini-instruct-bnb-4bit
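
Since the card lists `library_name: peft`, this repo presumably hosts a LoRA adapter on top of the 4-bit base model. A minimal loading sketch without Unsloth, using `transformers` and `peft` (the adapter repo ID below is a placeholder; substitute this model's actual Hub ID):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "unsloth/phi-3.5-mini-instruct-bnb-4bit"
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, "<this-repo-id>")  # placeholder: this model's Hub ID
tokenizer = AutoTokenizer.from_pretrained(base_id)
```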

### Usage 

```python
from unsloth import FastLanguageModel

max_seq_length = 2048 # Choose any! We auto support RoPE Scaling internally!
dtype = None # None for auto detection. Float16 for Tesla T4, V100, Bfloat16 for Ampere+
load_in_4bit = True # Use 4bit quantization to reduce memory usage. Can be False.

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "outputs/checkpoint-3000", # YOUR MODEL YOU USED FOR TRAINING, or this repo's Hub ID
    max_seq_length = max_seq_length,
    dtype = dtype,
    load_in_4bit = load_in_4bit,
)
FastLanguageModel.for_inference(model) # Enable native 2x faster inference

tools = [
    {
        "name": "upcoming",
        "description": "Fetches upcoming CS:GO matches data from the specified API endpoint.",
        "parameters": {
            "content_type": {
                "description": "The content type for the request, default is 'application/json'.",
                "type": "str",
                "default": "application/json",
            },
            "page": {
                "description": "The page number to retrieve, default is 1.",
                "type": "int",
                "default": "1",
            },
            "limit": {
                "description": "The number of matches to retrieve per page, default is 10.",
                "type": "int",
                "default": "10",
            },
        },
    }
]
query = "What are the next 10 upcoming CS:GO matches?"  # example query; substitute your own

messages = [
    {
        "role": "user",
        "content": f"You are a helpful assistant. Below are the tools that you have access to. \n\n### Tools: \n{tools} \n\n### Query: \n{query} \n",
    },
]

inputs = tokenizer.apply_chat_template(
    messages, tokenize=True, add_generation_prompt=True, return_tensors="pt"
).to(model.device)  # renamed from `input` to avoid shadowing the built-in

output = model.generate(
    input_ids=inputs, max_new_tokens=512, do_sample=False  # greedy decoding
)

decoded_output = tokenizer.decode(output[0], skip_special_tokens=True)
```
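
The dataset's targets are JSON lists of `{"name": ..., "arguments": {...}}` objects, so this checkpoint is expected, though not guaranteed, to emit the same shape. A minimal sketch for extracting the generated tool calls:

```python
import json
import re

# Decode only the newly generated tokens; the prompt itself contains the
# tool list, which would otherwise match the JSON extraction below.
generated = tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True)

# Grab the first JSON array in the generation and parse it into tool calls.
match = re.search(r"\[.*\]", generated, re.DOTALL)
if match:
    try:
        calls = json.loads(match.group(0))
        for call in calls:
            print(call["name"], call.get("arguments", {}))
    except json.JSONDecodeError:
        print("Model output was not valid JSON:", match.group(0))
```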


[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)