---
license: apache-2.0
language:
- en
---

<div align="center">

# Mini-Magnum-Unboxed-12B-GGUF
</div>

This is the GGUF quantization of https://huggingface.co/concedo/Mini-Magnum-Unboxed-12B, which was finetuned on top of https://huggingface.co/intervitens/mini-magnum-12b-v1.1 to correct a few minor personal annoyances with what is otherwise an excellent model.

You can use [KoboldCpp](https://github.com/LostRuins/koboldcpp/releases/latest) to run this model.
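
If you would rather script the download than grab the file by hand, below is a minimal sketch using `huggingface_hub`. The repo id and quant filename are placeholders, so check this repository's file list for the actual names and pick the quantization level you want.

```python
# Minimal sketch: fetch a quantized GGUF file with huggingface_hub.
# Both repo_id and filename below are placeholders -- check the repo's
# file list for the quant you actually want (e.g. Q4_K_M, Q6_K, ...).
from huggingface_hub import hf_hub_download

model_path = hf_hub_download(
    repo_id="concedo/Mini-Magnum-Unboxed-12B-GGUF",  # assumed repo id for this card
    filename="Mini-Magnum-Unboxed-12B-Q4_K_M.gguf",  # hypothetical quant filename
)
print(model_path)  # pass this path to KoboldCpp when launching it
```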

- **Instruct prompt format changed to Alpaca** - Honestly, I don't know why more models don't use it. If you are an Alpaca format lover like me, this should help.
- **Instruct Decensoring Applied** - You should not need a jailbreak for a model to obey the user. The model should always do what you tell it to. No need for weird `"Sure, I will"` or kitten-murdering-threat tricks.
- **Short Conversation Tuning** - For people who also want to *chat* (think chatbot/DM) with a character rather than just roleplay with it. This adds a small dataset of short chat-message conversations.

<!-- prompt-template start -->
## Prompt template: Alpaca

```
### Instruction:
{prompt}

### Response:
```

<!-- prompt-template end -->
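
As a usage sketch (not part of the original card), the snippet below wraps a user message in the Alpaca template above and queries a locally running KoboldCpp instance. The endpoint path, default port, and sampling settings are assumptions based on KoboldCpp's KoboldAI-compatible API; adjust them to your setup.

```python
# Minimal sketch: format an Alpaca prompt and send it to a local KoboldCpp server.
# Assumes KoboldCpp's KoboldAI-compatible /api/v1/generate endpoint on its
# default port (5001); change the URL and sampling values to match your setup.
import requests

ALPACA_TEMPLATE = "### Instruction:\n{prompt}\n\n### Response:\n"

def generate(instruction: str, max_length: int = 200) -> str:
    payload = {
        "prompt": ALPACA_TEMPLATE.format(prompt=instruction),
        "max_length": max_length,
        "temperature": 0.7,  # example sampling setting, not a recommendation
    }
    resp = requests.post("http://localhost:5001/api/v1/generate", json=payload, timeout=300)
    resp.raise_for_status()
    return resp.json()["results"][0]["text"]

print(generate("Write a short greeting from a tavern keeper."))
```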

Please leave any feedback or report any issues you run into. All credit goes to the tuners of the original mini-magnum-12b-v1.1 model, as well as to Mistral for the Mistral Nemo base model.