
Introduction

This model is trained with Masked Thought Fine-Tuning (MFT), a simple variant of standard Supervised Fine-Tuning (SFT). Please refer to our code and paper below for details.
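A minimal sketch of the Masked Thought idea: during SFT, a fraction of the reasoning-step tokens in the model input is randomly replaced with a mask token, while the training labels stay unchanged. The helper below is illustrative only; the exact masking ratio, mask token, and eligible tokens are defined in the paper and code, not in this card.

```python
# Illustrative sketch of Masked Thought Fine-Tuning (MFT), not the official implementation.
import random
import torch

def mask_thought_inputs(input_ids, labels, mask_token_id, mask_ratio=0.2):
    """Randomly replace a fraction of solution (thought) tokens in the input
    while keeping the supervision labels unchanged.

    Assumes `labels` is -100 on prompt tokens (standard SFT convention), so
    only reasoning/solution tokens are eligible for masking."""
    input_ids = input_ids.clone()
    eligible = (labels != -100).nonzero(as_tuple=True)[0].tolist()
    num_to_mask = int(len(eligible) * mask_ratio)
    for idx in random.sample(eligible, num_to_mask):
        input_ids[idx] = mask_token_id
    return input_ids

# Usage: apply inside the data collator before each step, then train with
# ordinary SFT (cross-entropy on the original, unmasked labels).
```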

Links

Results

We evaluate the model with the evaluation scripts provided in the MetaMath repository.

| Model | GSM8K | MATH |
| --- | --- | --- |
| adalaw/MetaMath-Mistral-7B-MFT | 79.90 | 29.0 |
| meta-math/MetaMath-Mistral-7B-SFT | 77.70 | 28.2 |
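
Below is a minimal sketch of running the model with a MetaMath-style prompt. The prompt template is an assumption carried over from the MetaMath project; consult the MetaMath evaluation scripts for the exact format used in the table above.

```python
# Illustrative inference sketch; the prompt template is assumed to follow MetaMath.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "adalaw/MetaMath-Mistral-7B-MFT"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nNatalia sold clips to 48 of her friends in April, "
    "and then she sold half as many clips in May. "
    "How many clips did Natalia sell altogether in April and May?\n\n"
    "### Response: Let's think step by step."
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512, do_sample=False)
# Print only the newly generated answer tokens.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```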
