---
base_model:
- mistralai/Mistral-Nemo-Instruct-2407
- intervitens/mini-magnum-12b-v1.1
license: apache-2.0
library_name: transformers
tags:
- merge
- roleplay
- not-for-all-audiences
---

# Magnum-Instruct-12B

A simple della_linear merge done at a 50/50 split with high density, using Mini-Magnum and Nemo Instruct. Nothing fancy to it, really. Seems good on both intelligence and creativity so far.

Big thanks to the MistralAI and Anthracite/SillyTilly teams for the models used!

GGUF quants made by mradermacher: https://huggingface.co/mradermacher/Magnum-Instruct-12B-GGUF

## Settings

- Temperature @ 0.7
- Min-P @ 0.02
- Smoothing Factor @ 0.3
- Smoothing Curve @ 1.5
- DRY Multiplier (plus standard DRY settings) @ 0.8
- Skip Special Tokens @ On
- Everything else @ Off

### Prompt Format: Nemo-Mistral

```
[INST] user prompt[/INST] character response[INST] user prompt[/INST]
```

### Models Merged

The following models were included in the merge:

- https://huggingface.co/intervitens/mini-magnum-12b-v1.1
- https://huggingface.co/mistralai/Mistral-Nemo-Instruct-2407
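As a quick illustration of the Nemo-Mistral prompt format above, here is a minimal sketch of how a multi-turn conversation could be flattened into a single prompt string. `build_prompt` is a hypothetical helper written for this card, not part of transformers or any other library; in practice the model's bundled chat template handles this for you.

```python
def build_prompt(turns):
    """Flatten (user, response) turn pairs into the Nemo-Mistral format.

    `turns` is a list of (user_prompt, character_response) tuples; pass
    None as the response of the final tuple to leave the prompt open for
    the model to continue.
    """
    prompt = ""
    for user, response in turns:
        # Each user turn is wrapped in [INST] ... [/INST].
        prompt += f"[INST] {user}[/INST]"
        if response is not None:
            # The character response follows directly, with no trailing
            # separator before the next [INST] block.
            prompt += f" {response}"
    return prompt

# One completed turn plus an open user turn:
print(build_prompt([
    ("Hello there!", "General Kenobi."),
    ("Tell me a story.", None),
]))
# → [INST] Hello there![/INST] General Kenobi.[INST] Tell me a story.[/INST]
```

Note that this sketch omits special tokens such as `<s>`; with Skip Special Tokens on, the backend is expected to manage those.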