---
library_name: transformers
tags:
- trl
- orpo
license: apache-2.0
datasets:
- nbeerbower/gutenberg2-dpo
- nbeerbower/gutenberg-moderne-dpo
base_model:
- nbeerbower/Mahou-1.5-mistral-nemo-12B-lorablated
---

![image/png](https://huggingface.co/nbeerbower/MN-Moderne-EXPERIMENT/resolve/main/moderne-fft-cover.png?download=true)

> 🧪 **Just Another Model Experiment**
>
> This is one of many experimental iterations I'm sharing publicly while I mess around with training parameters and ideas. It's not a "real" release - just me being transparent about my learning process. Feel free to look under the hood, but don't expect anything production-ready!

# Mistral-Nemo-Moderne-12B-FFT-experimental

[Mahou-1.5-mistral-nemo-12B-lorablated](https://huggingface.co/nbeerbower/Mahou-1.5-mistral-nemo-12B-lorablated) finetuned on [gutenberg2-dpo](https://huggingface.co/datasets/nbeerbower/gutenberg2-dpo) and [gutenberg-moderne-dpo](https://huggingface.co/datasets/nbeerbower/gutenberg-moderne-dpo).

**This model exhibits erratic behavior and poor performance.**

### Method

[ORPO tuned](https://mlabonne.github.io/blog/posts/2024-04-19_Fine_tune_Llama_3_with_ORPO.html) with 8x A100 for 1.5 epochs. This was a full finetune. I think the issues with the model can be chalked up to conflicts between the Mistral Instruct and ChatML formats.
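
For reference, here is a minimal sketch of what an ORPO full finetune along the lines of the linked recipe could look like with TRL's `ORPOTrainer`. The hyperparameters (learning rate, batch size, beta, sequence lengths) are illustrative assumptions rather than the settings actually used for this model, and the snippet assumes a TRL version whose `ORPOTrainer` accepts a `tokenizer` argument, as in the linked post, and that both datasets expose the `prompt`/`chosen`/`rejected` columns.

```python
# Minimal ORPO full-finetune sketch with TRL.
# Hyperparameters are illustrative assumptions; only the 1.5 epochs comes from the card.
import torch
from datasets import concatenate_datasets, load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import ORPOConfig, ORPOTrainer

base = "nbeerbower/Mahou-1.5-mistral-nemo-12B-lorablated"
model = AutoModelForCausalLM.from_pretrained(base, torch_dtype=torch.bfloat16)
tokenizer = AutoTokenizer.from_pretrained(base)

# Assumes both datasets provide the prompt/chosen/rejected columns ORPOTrainer expects.
dataset = concatenate_datasets([
    load_dataset("nbeerbower/gutenberg2-dpo", split="train"),
    load_dataset("nbeerbower/gutenberg-moderne-dpo", split="train"),
])

args = ORPOConfig(
    output_dir="./mn-moderne-orpo",    # hypothetical output path
    num_train_epochs=1.5,              # stated in the card
    per_device_train_batch_size=1,     # assumption for 8x A100
    gradient_accumulation_steps=8,     # assumption
    learning_rate=8e-6,                # assumption
    beta=0.1,                          # ORPO odds-ratio weight; assumption
    max_length=2048,                   # assumption
    max_prompt_length=1024,            # assumption
    bf16=True,
)

trainer = ORPOTrainer(
    model=model,
    args=args,
    train_dataset=dataset,
    tokenizer=tokenizer,
)
trainer.train()
```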