🧪 Just Another Model Experiment
This is one of many experimental iterations I'm sharing publicly while I mess around with training parameters and ideas. It's not a "real" release - just me being transparent about my learning process. Feel free to look under the hood, but don't expect anything production-ready!
Mistral-Nemo-Moderne-12B-FFT-experimental
Mahou-1.5-mistral-nemo-12B-lorablated finetuned on gutenberg2-dpo and gutenberg-moderne-dpo.
This model exhibits erratic behavior and poor performance.
Method
ORPO tuned with 8x A100 for 1.5 epochs, as a full finetune (see the sketch below).
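For anyone curious what that setup roughly looks like, here's a minimal sketch using TRL's `ORPOTrainer`. The hub IDs, hyperparameters, and `beta` value are illustrative assumptions, not the exact recipe for this run.

```python
# Minimal ORPO full-finetune sketch with TRL. Hub IDs, hyperparameters,
# and beta are illustrative assumptions, NOT the exact recipe for this run.
import torch
from datasets import concatenate_datasets, load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import ORPOConfig, ORPOTrainer

base = "nbeerbower/Mahou-1.5-mistral-nemo-12B-lorablated"  # assumed hub ID

model = AutoModelForCausalLM.from_pretrained(base, torch_dtype=torch.bfloat16)
tokenizer = AutoTokenizer.from_pretrained(base)

# Both datasets are assumed to expose prompt/chosen/rejected columns,
# the schema ORPOTrainer expects for preference data.
train = concatenate_datasets([
    load_dataset("nbeerbower/gutenberg2-dpo", split="train"),         # assumed hub ID
    load_dataset("nbeerbower/gutenberg-moderne-dpo", split="train"),  # assumed hub ID
])

config = ORPOConfig(
    output_dir="Mistral-Nemo-Moderne-12B-FFT-experimental",
    num_train_epochs=1.5,           # matches the run length stated above
    per_device_train_batch_size=1,  # illustrative; the real run used 8x A100
    gradient_accumulation_steps=8,  # illustrative
    learning_rate=5e-6,             # illustrative
    beta=0.1,                       # ORPO's lambda weight; illustrative
    max_length=2048,
    max_prompt_length=1024,
    bf16=True,
)

trainer = ORPOTrainer(
    model=model,
    args=config,
    train_dataset=train,
    tokenizer=tokenizer,  # `processing_class` in newer trl releases
)
trainer.train()
```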
I think the issues with the model can be chalked up to conflicts between the Mistral Instruct and ChatML prompt formats.
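For context, these are the standard forms of the two templates in question (shown for illustration only):

```python
# Mistral Instruct wraps each user turn in [INST] ... [/INST]:
mistral_instruct = "<s>[INST] Write a short scene. [/INST] The rain fell in sheets...</s>"

# ChatML delimits every turn with <|im_start|> / <|im_end|> special tokens:
chatml = (
    "<|im_start|>user\n"
    "Write a short scene.<|im_end|>\n"
    "<|im_start|>assistant\n"
    "The rain fell in sheets...<|im_end|>\n"
)
```

Mixing data formatted both ways in a full finetune can leave the model unsure which turn markers and stop tokens to honor, which would line up with the erratic behavior noted above.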