Magnum-Instruct-12B
A simple della_linear merge of Mini-Magnum and Mistral Nemo Instruct at a 50/50 split with high density. Nothing fancy to it, really. So far it seems good on both intelligence and creativity.
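A merge like this is typically produced with mergekit. The sketch below shows what a della_linear config at a 50/50 split might look like; the exact repository IDs and the density value are assumptions (the card only says "high density"), so treat this as illustrative rather than the author's actual recipe.

```yaml
# Hypothetical mergekit config for a 50/50 della_linear merge.
# Model repo IDs below are assumed, not confirmed by the card.
models:
  - model: mini-magnum          # placeholder ID for Mini-Magnum
    parameters:
      weight: 0.5
      density: 0.9              # "high density" per the card; exact value unknown
  - model: nemo-instruct        # placeholder ID for Mistral Nemo Instruct
    parameters:
      weight: 0.5
      density: 0.9
merge_method: della_linear
base_model: nemo-instruct       # assumed base; placeholder ID
dtype: bfloat16
```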
Big thanks to the MistralAI and Anthracite/SillyTilly teams for the models used!
GGUF quants made by mradermacher:
https://huggingface.co/mradermacher/Magnum-Instruct-12B-GGUF
Settings
Temperature @ 0.7
Min-P @ 0.02
Smoothing Factor @ 0.3
Smoothing Curve @ 1.5
DRY Multiplier (plus standard DRY settings) @ 0.8
Skip Special Tokens @ On
Everything else @ Off
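The settings above can be collected into a sampler preset. This is a hypothetical mapping using llama.cpp-style key names; actual parameter names vary by frontend (SillyTavern, text-generation-webui, etc.), so adapt the keys to your backend.

```python
# Hypothetical sampler preset mirroring the recommended settings above.
# Key names follow common llama.cpp-style conventions and are assumptions;
# they may differ in your particular frontend.
SAMPLER_PRESET = {
    "temperature": 0.7,
    "min_p": 0.02,
    "smoothing_factor": 0.3,
    "smoothing_curve": 1.5,
    "dry_multiplier": 0.8,       # plus standard DRY settings
    "skip_special_tokens": True,
    # everything else: off / default
}
```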
Prompt Format: Nemo-Mistral
[INST] user prompt[/INST] character response</s>[INST] user prompt[/INST]
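If your frontend does not ship a Nemo-Mistral template, the format above can be assembled by hand. A minimal sketch (function name and turn structure are my own, not from the card):

```python
def build_nemo_mistral_prompt(turns, eos="</s>"):
    """Build a Nemo-Mistral prompt string.

    `turns` is a list of (user_msg, assistant_msg) pairs; pass None as the
    final assistant message to leave the prompt open for generation.
    Mirrors: [INST] user prompt[/INST] character response</s>[INST] ...
    """
    parts = []
    for user_msg, assistant_msg in turns:
        parts.append(f"[INST] {user_msg}[/INST]")
        if assistant_msg is not None:
            # Response is preceded by a space and closed with the EOS token.
            parts.append(f" {assistant_msg}{eos}")
    return "".join(parts)
```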
Models Merged
The following models were included in the merge:
- Mini-Magnum
- Mistral Nemo Instruct