Tags: Text Generation · Transformers · Safetensors · mistral · conversational · text-generation-inference · Inference Endpoints


This model was trained on our MetamathFewshot dataset, as well as the Vicuna dataset and the OrcaChat dataset.

It was finetuned from the base Mistral 7B model.

Usage

This model uses a specific prompt format which is encoded as a chat template. To apply this, you can use the tokenizer.apply_chat_template() method of the attached tokenizer:

from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the model and tokenizer (the tokenizer carries the chat template).
tokenizer = AutoTokenizer.from_pretrained("abacusai/Fewshot-Metamath-OrcaVicuna-Mistral")
model = AutoModelForCausalLM.from_pretrained("abacusai/Fewshot-Metamath-OrcaVicuna-Mistral")
messages = [
    {"role": "user", "content": "What is the capital of Spain?"},
    {"role": "assistant", "content": "The capital of Spain is Madrid."}
]
# apply_chat_template with return_tensors="pt" returns a tensor of input ids.
gen_input = tokenizer.apply_chat_template(messages, return_tensors="pt")
model.generate(gen_input)
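To sample an actual reply, you can pass only the user turns, ask the tokenizer to append the generation prompt, and decode the newly generated tokens. This is a minimal sketch assuming the model and tokenizer loaded above; max_new_tokens=128 is an arbitrary illustrative choice.

# Build a prompt from the user turn only and append the assistant generation prompt.
prompt = [{"role": "user", "content": "What is the capital of Spain?"}]
input_ids = tokenizer.apply_chat_template(
    prompt, add_generation_prompt=True, return_tensors="pt"
)
output_ids = model.generate(input_ids, max_new_tokens=128)
# Decode only the new tokens, skipping the prompt and special tokens.
reply = tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True)
print(reply)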

Evaluation Results

HuggingFace Leaderboard

Average  ARC    HellaSwag  MMLU   TruthfulQA  Winogrande  GSM8K
67.33    59.64  81.82      61.69  53.23       78.45       69.14

For comparison, the original metamath/MetaMath-Mistral-7B scored 68.84 on GSM8K, with an average score of 65.78.

MT-Bench

Turn 1   Turn 2   Average
6.90     6.52     6.71

Training Details

Instruction tuned with the following parameters (a configuration sketch follows the list):

  • LoRA with rank 8, alpha 16, dropout 0.05, applied to all modules (QKV and MLP)
  • 3 epochs
  • Micro batch size 32 over 4x H100 GPUs, gradient accumulation steps = 1
  • AdamW with learning rate 5e-5
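
The training code is not included in this card; the following is a minimal sketch of an equivalent setup using the peft and transformers libraries. The target_modules names are assumptions based on the Mistral architecture, and the micro batch size is assumed to be per device.

from peft import LoraConfig
from transformers import TrainingArguments

# LoRA over the attention (QKV + output) and MLP projections, as described above.
# Module names are an assumption based on the Mistral 7B architecture.
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    task_type="CAUSAL_LM",
)

# Optimizer and schedule settings matching the list above; other values are defaults.
training_args = TrainingArguments(
    output_dir="fewshot-metamath-orcavicuna-mistral",
    num_train_epochs=3,
    per_device_train_batch_size=32,   # assumed per-device "micro batch size"
    gradient_accumulation_steps=1,
    learning_rate=5e-5,
    optim="adamw_torch",
)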

Bias, Risks, and Limitations

The model has not been evaluated for safety and is only intended for research and experiments.

Model size: 7.24B params · Tensor type: FP16 (Safetensors)
