
Magnum-Instruct-12B

A simple della_linear merge at a 50/50 split with high density, using Mini-Magnum and Nemo Instruct. Nothing fancy to it, really. It seems good on both intelligence and creativity so far.
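A merge like the one described above could be expressed as a mergekit config roughly like the following. This is a hypothetical sketch, not the published recipe: the exact density value, base model choice, and dtype are assumptions filled in from the description ("50/50 split with high density").

```yaml
# Hypothetical mergekit sketch -- the actual config was not published.
# density: 0.9 is an assumed stand-in for "high density".
models:
  - model: intervitens/mini-magnum-12b-v1.1
    parameters:
      weight: 0.5
      density: 0.9
  - model: mistralai/Mistral-Nemo-Instruct-2407
    parameters:
      weight: 0.5
      density: 0.9
merge_method: della_linear
base_model: mistralai/Mistral-Nemo-Instruct-2407
dtype: bfloat16
```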

Big thanks to the MistralAI and Anthracite/SillyTilly teams for the models used!

GGUF quants made by mradermacher:

https://huggingface.co/mradermacher/Magnum-Instruct-12B-GGUF

Settings

Temperature @ 0.7

Min-P @ 0.02

Smoothing Factor @ 0.3

Smoothing Curve @ 1.5

DRY Multiplier (plus standard DRY settings) @ 0.8

Skip Special Tokens @ On

Everything else @ Off
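The settings above can be collected into a single sampler configuration. This is an illustrative sketch: the key names below follow common frontend conventions (e.g. SillyTavern / text-generation-webui style) and are assumptions, not a fixed API.

```python
# Recommended sampler settings from the model card, as a plain dict.
# Key names are illustrative assumptions; adapt them to your frontend.
sampler_settings = {
    "temperature": 0.7,
    "min_p": 0.02,
    "smoothing_factor": 0.3,
    "smoothing_curve": 1.5,
    "dry_multiplier": 0.8,   # standard DRY settings otherwise
    "skip_special_tokens": True,
    # everything else: off / default
}
```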

Prompt Format: Nemo-Mistral

[INST] user prompt[/INST] character response</s>[INST] user prompt[/INST]
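The template above can be built programmatically. A minimal sketch, assuming a list of (user, assistant) turn pairs where the final assistant slot is left open for the model to complete; `format_nemo_mistral` is a hypothetical helper, not part of any library.

```python
def format_nemo_mistral(turns):
    """Build a Nemo-Mistral prompt string from (user, assistant) pairs.

    Pass None as the assistant reply for the final turn so the prompt
    ends right after [/INST], ready for the model to continue.
    """
    out = ""
    for user, assistant in turns:
        out += f"[INST] {user}[/INST]"
        if assistant is not None:
            out += f" {assistant}</s>"
    return out

prompt = format_nemo_mistral([("Hello!", "Hi there."), ("How are you?", None)])
# -> "[INST] Hello![/INST] Hi there.</s>[INST] How are you?[/INST]"
```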

Models Merged

The following models were included in the merge:

https://huggingface.co/intervitens/mini-magnum-12b-v1.1

https://huggingface.co/mistralai/Mistral-Nemo-Instruct-2407

Model size: 12.2B params (BF16, Safetensors)