---
base_model: unsloth/phi-3.5-mini-instruct-bnb-4bit
language:
  - en
license: apache-2.0
tags:
  - text-generation-inference
  - transformers
  - unsloth
  - llama
  - trl
datasets:
  - Salesforce/xlam-function-calling-60k
pipeline_tag: text-generation
library_name: peft
---

# Model Card: Phi-3.5-mini Function Calling

This model is a function-calling version of microsoft/phi-3.5-mini-instruct, fine-tuned on the Salesforce/xlam-function-calling-60k dataset.

## Uploaded model

- **Developed by:** akshayballal
- **License:** apache-2.0
- **Finetuned from model:** unsloth/phi-3.5-mini-instruct-bnb-4bit

## Usage

```python
from unsloth import FastLanguageModel

max_seq_length = 2048  # Choose any! We auto support RoPE Scaling internally!
dtype = None  # None for auto detection. Float16 for Tesla T4, V100, Bfloat16 for Ampere+
load_in_4bit = True  # Use 4bit quantization to reduce memory usage. Can be False.

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "outputs/checkpoint-3000",  # path or Hub ID of the fine-tuned checkpoint
    max_seq_length = max_seq_length,
    dtype = dtype,
    load_in_4bit = load_in_4bit,
)
FastLanguageModel.for_inference(model)  # Enable native 2x faster inference

tools = [
    {
        "name": "upcoming",
        "description": "Fetches upcoming CS:GO matches data from the specified API endpoint.",
        "parameters": {
            "content_type": {
                "description": "The content type for the request, default is 'application/json'.",
                "type": "str",
                "default": "application/json",
            },
            "page": {
                "description": "The page number to retrieve, default is 1.",
                "type": "int",
                "default": "1",
            },
            "limit": {
                "description": "The number of matches to retrieve per page, default is 10.",
                "type": "int",
                "default": "10",
            },
        },
    }
]

query = "Get the next 5 upcoming CS:GO matches."  # example user query

messages = [
    {
        "role": "user",
        "content": f"You are a helpful assistant. Below are the tools that you have access to. \n\n### Tools: \n{tools} \n\n### Query: \n{query} \n",
    },
]

inputs = tokenizer.apply_chat_template(
    messages, tokenize=True, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(
    input_ids=inputs, max_new_tokens=512, do_sample=False  # greedy decoding
)

decoded_output = tokenizer.decode(output[0], skip_special_tokens=True)
print(decoded_output)
```
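
The decoded text above still includes the prompt. Below is a minimal sketch for recovering only the generated answer and parsing it as tool calls, assuming the checkpoint responds with a JSON array of `{"name": ..., "arguments": ...}` objects in the style of the xlam-function-calling-60k dataset; adjust the parsing if your outputs are formatted differently.

```python
import json

# Decode only the newly generated tokens so the prompt is not included.
generated_tokens = output[0][inputs.shape[-1]:]
answer = tokenizer.decode(generated_tokens, skip_special_tokens=True).strip()

# Assumption: the answer is a JSON array of {"name": ..., "arguments": ...}
# objects, following the xlam-function-calling-60k format.
try:
    tool_calls = json.loads(answer)
except json.JSONDecodeError:
    tool_calls = None

print(tool_calls)
# e.g. [{"name": "upcoming", "arguments": {"page": 1, "limit": 5}}]
```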