
Uploaded model

  • Developed by: Trappu
  • License: apache-2.0
  • Finetuned from model: royallab/MN-LooseCannon-12B-v2

Details

This model was trained on my own little dataset, free of synthetic data, which focuses solely on storywriting and scenario prompting (example: [ Scenario: bla bla bla; Tags: bla bla bla ]).
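For illustration, here is a minimal Python sketch of the scenario-prompt format described above. The field names (Scenario, Tags) come from this card; the helper function and the example values are hypothetical.

```python
# Hypothetical helper illustrating the [ Scenario: ...; Tags: ... ] format
# this model was trained on; the example values are made up.
def build_scenario_prompt(scenario: str, tags: list[str]) -> str:
    return f"[ Scenario: {scenario}; Tags: {', '.join(tags)} ]"

print(build_scenario_prompt(
    "a knight guards the last library in a flooded city",
    ["fantasy", "adventure", "third person"],
))
# [ Scenario: a knight guards the last library in a flooded city; Tags: fantasy, adventure, third person ]
```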

I don't really recommend this model due to its nature and obvious flaws (rampant impersonation, stupid, etc.). It's a one-trick pony and will be really rough for the average LLM user to handle.

Instead, I recommend you guys use Magnum-Picaro-0.7-v2-12b. The idea was to have Magnum act as a stabilizer, fixing the issues that emerge from the lack of multiturn/smart data in Picaro's dataset. It worked, I think. I enjoy the outputs, and it's smart enough to work with.

Prompting

If, for some reason, you still want to try this model over Magnum-Picaro: it was trained on ChatML with no system prompts, so the recommended prompt formatting is shown below.

```
<|im_start|>user
bla bla bla<|im_end|>
<|im_start|>assistant
bla bla bla you!<|im_end|>
```
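If you'd rather script this than use a frontend, here is a minimal sketch, assuming the repo loads with transformers' AutoModelForCausalLM. It builds the ChatML prompt by hand (no system turn, per the note above) and samples with the Temp/Min P values recommended below; stock transformers has no DRY sampler, so that setting is omitted here.

```python
# Minimal sketch: hand-built ChatML prompt (no system turn) + generation.
# Assumes the repo works with AutoModelForCausalLM; verify before relying on it.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Trappu/Nemo-Picaro-12B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="bfloat16", device_map="auto"
)

user_message = "[ Scenario: bla bla bla; Tags: bla bla bla ]"
prompt = (
    f"<|im_start|>user\n{user_message}<|im_end|>\n"
    "<|im_start|>assistant\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(
    **inputs,
    max_new_tokens=512,
    do_sample=True,
    temperature=1.2,  # Temp from the recommended settings below
    min_p=0.1,        # Min P from the recommended settings below
)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```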

For SillyTavern users:

  • Instruct template
  • Context template
  • Settings preset

The above settings are the ones I recommend.

  • Temp = 1.2
  • Min P = 0.1
  • DRY Rep Pen: Multiplier = 0.8, Base = 1.75, Allowed Length = 2, Penalty Range = 1024

A little guide on useful samplers, how to import settings presets and instruct/context templates, and other stuff people might find useful can be found here.

Every other sampler should be neutralized.
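As a sketch of how these settings could be passed to a backend directly, the snippet below posts them to a local llama.cpp server's /completion endpoint. The DRY field names (dry_multiplier, dry_base, dry_allowed_length, dry_penalty_last_n) match recent llama.cpp builds as far as I know, but other backends spell them differently, so check your backend's API docs.

```python
# Hedged sketch: sending the recommended sampler settings to a local
# llama.cpp server. Field names follow recent llama.cpp builds; other
# backends (koboldcpp, text-generation-webui) use different spellings.
import requests

payload = {
    "prompt": (
        "<|im_start|>user\n"
        "[ Scenario: bla bla bla; Tags: bla bla bla ]<|im_end|>\n"
        "<|im_start|>assistant\n"
    ),
    "temperature": 1.2,          # Temp = 1.2
    "min_p": 0.1,                # Min P = 0.1
    "dry_multiplier": 0.8,       # DRY Multiplier = 0.8
    "dry_base": 1.75,            # DRY Base = 1.75
    "dry_allowed_length": 2,     # DRY Allowed Length = 2
    "dry_penalty_last_n": 1024,  # DRY Penalty Range = 1024
    # Leave every other sampler at its neutral default, per the card.
    "n_predict": 512,
    "stop": ["<|im_end|>"],
}

resp = requests.post("http://127.0.0.1:8080/completion", json=payload, timeout=600)
print(resp.json()["content"])
```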
