
## Simple Use Case

This section demonstrates a simple way to interact with our model and have it solve problems in a step-by-step, friendly manner.

### Define the Function

We define a function `get_completion` that takes user input, combines it with a predefined system prompt, and sends the combined prompt to the model. The model's response is then printed out.

Here's how the function is implemented:

```python
import torch
from transformers import pipeline

# Load the model as a text-generation pipeline
test_pipeline = pipeline(model="zaursamedov1/FIxtral",
                         torch_dtype=torch.bfloat16,
                         trust_remote_code=True,
                         device_map="auto")

# Define the function
def get_completion(user_input):
    system = "Think step by step and solve the problem in a friendly way."
    prompt = f"#### System: {system}\n#### User: \n{user_input}\n\n#### Response from FIxtral model:"
    print(prompt)
    output = test_pipeline(prompt, max_new_tokens=500)
    return output[0]["generated_text"]

# Let's prompt
prompt = "problem"  # replace with the problem you want solved
print(get_completion(prompt))
```
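
By default, the `transformers` text-generation pipeline returns the prompt together with the generated continuation, so the output above repeats the full prompt. If you only want the model's answer, you can pass `return_full_text=False`. Here is a minimal sketch reusing `test_pipeline` from above; the helper name `get_answer_only` and the example question are illustrative, not part of the model card:

```python
def get_answer_only(user_input):
    # Same prompt format as get_completion above
    system = "Think step by step and solve the problem in a friendly way."
    prompt = f"#### System: {system}\n#### User: \n{user_input}\n\n#### Response from FIxtral model:"
    # return_full_text=False drops the echoed prompt, keeping only the newly generated tokens
    result = test_pipeline(prompt, max_new_tokens=500, return_full_text=False)
    return result[0]["generated_text"].strip()

print(get_answer_only("What is 15% of 240?"))
```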