# Fine-tune LLaMA 2 (7B) with LoRA on meta-math/MetaMathQA

The LoRA adapter was fine-tuned for three epochs on the MetaMathQA training set.
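For reference, here is a minimal sketch of how such a LoRA fine-tuning run can be set up with the Hugging Face `peft` and `trl` libraries. The rank, alpha, dropout, target modules, and prompt template are illustrative assumptions, not the exact configuration used to train this adapter.

```python
# A minimal sketch of the fine-tuning setup, assuming the Hugging Face
# peft/trl stack; the hyperparameters below are illustrative guesses,
# not the exact configuration used to train this adapter.
from datasets import load_dataset
from peft import LoraConfig
from trl import SFTConfig, SFTTrainer

dataset = load_dataset("meta-math/MetaMathQA", split="train")

# Turn each (query, response) pair into a single training string
def to_text(example):
    return {"text": f"Question: {example['query']}\nAnswer: {example['response']}"}

dataset = dataset.map(to_text)

peft_config = LoraConfig(               # assumed LoRA hyperparameters
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)

trainer = SFTTrainer(
    model="meta-llama/Llama-2-7b-hf",   # base LLaMA 2 7B checkpoint
    train_dataset=dataset,
    peft_config=peft_config,
    args=SFTConfig(
        num_train_epochs=3,             # matches the three epochs above
        dataset_text_field="text",
        output_dir="metamath_lora_llama2_7b_3_epoch",
    ),
)
trainer.train()
trainer.save_model()                    # saves only the LoRA adapter weights
```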
Result:

After reloading the saved adapter and evaluating on the GSM8K test set:

- Invalid output length: 4
- Testing length: 1319
- Accuracy: 0.641
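As a rough illustration of how such an accuracy figure is computed, the sketch below extracts a final numeric answer from each generation and compares it to the reference. The `extract_answer` heuristic and the "The answer is:" marker are assumptions based on the typical MetaMathQA response format, not this repository's exact evaluation code.

```python
import re

ANSWER_MARKER = "The answer is:"  # assumed MetaMathQA-style final-answer marker

def extract_answer(text: str):
    """Pull the final numeric answer out of a generated solution, or None."""
    if ANSWER_MARKER not in text:
        return None  # counts toward the "invalid output" tally
    tail = text.split(ANSWER_MARKER)[-1]
    match = re.search(r"-?\d[\d,]*\.?\d*", tail)
    return match.group().replace(",", "") if match else None

def report_accuracy(generations, references):
    """Print stats in the same shape as the result line above."""
    correct = invalid = 0
    for gen, ref in zip(generations, references):
        pred = extract_answer(gen)
        if pred is None:
            invalid += 1
        elif pred == ref:
            correct += 1
    print(f"Invalid output length: {invalid}, "
          f"Testing length: {len(references)}, "
          f"Accuracy: {correct / len(references):.3f}")
```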
## Comparison

The officially reported accuracy is 0.665, achieved by fine-tuning the whole LLaMA 2 7B model for 3 epochs.

Note: This LoRA adapter is intended for future research purposes.
## Deployment

```python
from transformers import AutoModelForCausalLM

# Load the base LLaMA 2 7B model (adjust the checkpoint id if needed),
# then attach and enable the pre-trained LoRA adapter
model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")
model.load_adapter("shuyuej/metamath_lora_llama2_7b_3_epoch")
model.enable_adapters()
```
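Once the adapter is enabled, inference works like any causal LM. A usage sketch follows; the `Question:`/`Answer:` prompt template is an assumption for illustration and may differ from the format used in training.

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")

# Illustrative prompt; adapt to the template the adapter was trained with
prompt = "Question: What is 15% of 80?\nAnswer:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```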
## Evaluation results

- Accuracy (zero-shot): 0.641 — Arithmetic Reasoning on GSM8K (fine-tuned on meta-math/MetaMathQA)