Uploaded model
- Developed by: ricardo-larosa
- License: apache-2.0
- Finetuned from model: unsloth/mistral-7b-instruct-v0.2-bnb-4bit
This Mistral model was trained 2x faster with Unsloth and Hugging Face's TRL library.
Techniques used
- Quantization: Unsloth provides 4-bit quantized models, which are 4x faster to download and use 4x less memory. In my runs, the reduced precision had little effect on the model's performance.
- Low-Rank Adaptation (LoRA): Unsloth provides LoRA adapters, which make it possible to update only 1 to 10% of all parameters.
- Rotary Positional Embedding (RoPE) Scaling: Unsloth supports RoPE scaling internally, extending the usable context length beyond what fixed positional embeddings allow.
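The memory and parameter-count claims above can be checked with some back-of-the-envelope arithmetic. This is an illustrative sketch: the 7B parameter count is approximate for Mistral-7B, and the rank value is a hypothetical choice, not necessarily the one used for this model.

```python
# Rough weight-storage math behind the "4x less memory" claim.
PARAMS = 7_000_000_000  # approximate parameter count for Mistral-7B

def model_bytes(n_params: int, bits_per_param: int) -> int:
    """Raw weight storage only; ignores activations and optimizer state."""
    return n_params * bits_per_param // 8

fp16_gb = model_bytes(PARAMS, 16) / 1e9   # ~14 GB
int4_gb = model_bytes(PARAMS, 4) / 1e9    # ~3.5 GB
print(fp16_gb / int4_gb)                  # the 4x memory factor

# LoRA: for a weight matrix of shape (d_out, d_in), a rank-r adapter trains
# r * (d_in + d_out) parameters instead of the full d_out * d_in.
def lora_params(d_in: int, d_out: int, r: int) -> int:
    return r * (d_in + d_out)

full = 4096 * 4096                      # one square projection at hidden size 4096
adapter = lora_params(4096, 4096, 16)   # hypothetical rank r=16
print(adapter / full)                   # well under the 1-10% bound cited above
```

The exact trainable fraction depends on which layers get adapters and the chosen rank, but the point stands: adapter parameters are a small slice of the full model.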
Performance
I did not see any OOMs, and memory usage was steady at 10GB on an A100 GPU (I could easily have used a V100). In addition to these performance optimizations, I spent some time tweaking the parameters of the Supervised Fine-tuning Trainer (SFTTrainer) from the TRL library.
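For context, an SFTTrainer setup typically looks like the sketch below. The hyperparameter values here are hypothetical placeholders, not the exact ones used for this model, and `model`, `tokenizer`, and `dataset` are assumed to be already loaded (e.g. via Unsloth's loaders).

```python
# Sketch of a TRL SFTTrainer configuration; all values are illustrative.
from transformers import TrainingArguments
from trl import SFTTrainer

trainer = SFTTrainer(
    model=model,                 # 4-bit base model with LoRA adapters attached
    tokenizer=tokenizer,
    train_dataset=dataset,       # examples rendered with the prompt template
    dataset_text_field="text",
    max_seq_length=2048,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,  # effective batch size of 8
        learning_rate=2e-4,
        warmup_steps=10,
        max_steps=120,
        fp16=True,
        output_dir="outputs",
    ),
)
trainer.train()
```

Batch size, learning rate, and step count are the usual knobs to tune against memory headroom and convergence.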
Prompting
Finally, the prompt template is a simple alpaca-style template with three fields: instruction, english_sentence, and logical_form. The same template is used for training and inference.
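A template with those three fields can be sketched as follows; the section labels and exact wording are assumptions, since only the field names are given above.

```python
# Alpaca-style prompt with the three fields: instruction, english_sentence,
# and logical_form. The labels/wording here are illustrative guesses.
PROMPT_TEMPLATE = """### Instruction:
{instruction}

### English sentence:
{english_sentence}

### Logical form:
{logical_form}"""

def build_prompt(instruction: str, english_sentence: str, logical_form: str = "") -> str:
    # At inference time, logical_form is left empty for the model to complete;
    # at training time, it carries the gold annotation.
    return PROMPT_TEMPLATE.format(
        instruction=instruction,
        english_sentence=english_sentence,
        logical_form=logical_form,
    )

print(build_prompt(
    "Translate the sentence into its logical form.",
    "A cat slept.",
))
```

Using one function for both phases keeps the training and inference prompts byte-identical, which matters for a fine-tuned model's output quality.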
Model tree for ricardo-larosa/recogs-mistral-7b-instruct-v0.2-bnb-4bit
Base model
unsloth/mistral-7b-instruct-v0.2-bnb-4bit