
Mathmate-7B-DELLA-ORPO-D

Mathmate-7B-DELLA-ORPO-D is a fine-tuned version of Haleshot/Mathmate-7B-DELLA-ORPO, trained with the ORPO method and combined with a LoRA adapter trained on everyday conversations.
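For reference, a minimal sketch of how such a LoRA adapter could be applied to the base model with the peft library (the adapter path below is a placeholder; the published checkpoint already ships with the adapter applied):

from transformers import AutoModelForCausalLM
from peft import PeftModel

# Load the base model referenced above.
base = AutoModelForCausalLM.from_pretrained("Haleshot/Mathmate-7B-DELLA-ORPO")
# Attach a LoRA adapter (placeholder path) and fold its weights into the base.
model = PeftModel.from_pretrained(base, "path/to/lora-adapter")
model = model.merge_and_unload()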

Model Details

Dataset

The model was trained on the HuggingFaceTB/everyday-conversations-llama3.1-2k dataset, which focuses on everyday conversations and small talk.
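For a quick look at the data, the dataset can be loaded with the datasets library (the split and column names below follow the dataset's published layout and are assumptions here):

from datasets import load_dataset

# Load the SFT training split and print the first conversation.
ds = load_dataset("HuggingFaceTB/everyday-conversations-llama3.1-2k", split="train_sft")
print(ds[0]["messages"])  # assumed schema: a list of {"role", "content"} chat turns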

Usage

Here's an example of how to use the model:

from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

# Load the model in half precision and let accelerate place it on available devices.
model_name = "Haleshot/Mathmate-7B-DELLA-ORPO-D"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16, device_map="auto")

def generate_response(prompt, max_new_tokens=512):
    # Tokenize the prompt and move it to the same device as the model.
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    # Sample a single completion; a moderate temperature keeps replies varied but coherent.
    outputs = model.generate(**inputs, max_new_tokens=max_new_tokens, do_sample=True, temperature=0.7)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)

prompt = "Let's have a casual conversation about weekend plans."
response = generate_response(prompt)
print(response)
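If the tokenizer ships a chat template (an assumption here, inherited from the base model if present), conversational prompts can be formatted with apply_chat_template instead of a raw string:

# Build the prompt from structured chat turns rather than a raw string.
messages = [{"role": "user", "content": "Let's have a casual conversation about weekend plans."}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(generate_response(prompt))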

Acknowledgements

Thanks to the HuggingFaceTB team for providing the everyday conversations dataset used in this fine-tuning process.

