---
license: llama2
language:
- en
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/X3SBrIb.png" alt="MidnightRose" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>

### Overview

This version of Midnight Rose has a complex family tree, but I'll do my best to describe it. I will include the mergekit YAML files below.
* midnight-rose-70b-v2.0.1 (Component 1, unreleased): A DARE TIES merge of midnight-rose-70b-v1.0 and an unreleased midnight-rose-70b-v1.4 that used the same underlying models but with different weights and different LoRAs applied.
* wizard-tulu-dolphin-70b-v1.0 (Component 2, release planned): This model was the result of a DARE TIES merge between [WizardLM-70B-V1.0](https://huggingface.co/WizardLM/WizardLM-70B-V1.0) and [tulu-2-dpo-70b](https://huggingface.co/allenai/tulu-2-dpo-70b), which I then SLERP merged with a modified version of [dolphin-2.2-70b](https://huggingface.co/cognitivecomputations/dolphin-2.2-70b).
* Finally, I SLERP merged Component 1 and Component 2 above to produce this model.

What I like about this version of Midnight Rose is that it picked up some spiciness from Component 1 and some smarts from Component 2.

This model is uncensored. *You are responsible for whatever you do with it.*

This model was designed for roleplaying and storytelling and I think it does well at both. It *should* perform well at other tasks, but I haven't tested its capabilities in other areas.

### Sampler Tips

I recommend using the new Min-P sampler method with this model. The creator has a great [guide to it on Reddit](https://www.reddit.com/r/LocalLLaMA/comments/17vonjo/your_settings_are_probably_hurting_your_model_why/).
Dynamic Temp is also quite nice. Pair it with Min-P.

I find this model performs reasonably well at 8192 context, but you will likely get better results at 4096-6144 context.

Experiment with any and all of the settings below.

If you save the settings below as a .json file, you can import them directly into SillyTavern.
```
{
    "temp": 1.15,
    "temperature_last": true,
    "top_p": 1,
    "top_k": 0,
    "top_a": 0,
    "tfs": 1,
    "epsilon_cutoff": 0,
    "eta_cutoff": 0,
    "typical_p": 1,
    "min_p": 0.85,
    "rep_pen": 1.12,
    "rep_pen_range": 2048,
    "no_repeat_ngram_size": 0,
    "penalty_alpha": 0,
    "num_beams": 1,
    "length_penalty": 1,
    "min_length": 0,
    "encoder_rep_pen": 1,
    "freq_pen": 0.01,
    "presence_pen": 0,
    "do_sample": true,
    "early_stopping": false,
    "dynatemp": true,
    "min_temp": 0.5,
    "max_temp": 3,
    "dynatemp_exponent": 1,
    "smoothing_factor": 0,
    "add_bos_token": true,
    "truncation_length": 2048,
    "ban_eos_token": false,
    "skip_special_tokens": true,
    "streaming": true,
    "mirostat_mode": 0,
    "mirostat_tau": 2,
    "mirostat_eta": 0.1,
    "guidance_scale": 1,
    "negative_prompt": "",
    "grammar_string": "",
    "banned_tokens": "",
    "ignore_eos_token_aphrodite": false,
    "spaces_between_special_tokens_aphrodite": true,
    "sampler_order": [
        6,
        0,
        1,
        3,
        4,
        2,
        5
    ],
    "logit_bias": [],
    "n": 1,
    "rep_pen_size": 0,
    "genamt": 500,
    "max_length": 6144
}
```
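
For reference, here is a minimal sketch of sampling with roughly equivalent settings using Hugging Face transformers instead of SillyTavern. It assumes a recent transformers release that supports the `min_p` generation parameter, uses a fixed temperature because transformers has no dynamic temperature equivalent, and the repo id is a placeholder, not the real model path.
```python
# Rough transformers equivalent of the key settings above (sketch only).
# Assumptions: a recent transformers version with `min_p` support, enough
# VRAM for a 70B model in float16, and a placeholder repo id.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-namespace/midnight-rose-70b"  # placeholder; substitute the real repo

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

prompt = "USER:\nWrite the opening scene of a gothic mystery.\nASSISTANT:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

output = model.generate(
    **inputs,
    do_sample=True,            # "do_sample": true
    temperature=1.15,          # "temp": 1.15 (fixed; no dynatemp in transformers)
    min_p=0.85,                # "min_p": 0.85
    repetition_penalty=1.12,   # "rep_pen": 1.12
    max_new_tokens=500,        # "genamt": 500
)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```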

### Prompting Tips

Try the following context template for use in SillyTavern. It might help. If you save the text as a .json file, you can import it directly.

```
{
    "story_string": "{{#if system}}{{system}}\n{{/if}}\nCONTEXTUAL INFORMATION\n{{#if wiBefore}}\n- World and character info:\n{{wiBefore}}\n{{/if}}\n{{#if description}}\n- {{char}}'s background and persona:\n{{description}}\n{{/if}}\n{{#if mesExamples}}\n{{mesExamples}}\n{{/if}}\n{{#if personality}}{{personality}}\n{{/if}}\n{{#if scenario}}\n- Roleplay scenario:\n{{scenario}}\n{{/if}}\n{{#if wiAfter}}{{wiAfter}}\n{{/if}}\n{{#if persona}}{{persona}}\n{{/if}}",
    "example_separator": "",
    "chat_start": "",
    "use_stop_strings": false,
    "always_force_name2": true,
    "trim_sentences": false,
    "include_newline": false,
    "single_line": false,
    "name": "Midnight Rose"
}
```

Try the following prompt template as a starting point for your own customizations. The cajoling parts supposedly make LLMs perform better. See [this paper](https://arxiv.org/abs/2307.11760).
I recommend the Vicuna instruction format for this blend, but others work too. Tulu's format is another good one to use.

*This model responds to prompting!* I highly recommend playing with your system prompt to see how it reacts.

The example system prompt below contains some instructions related to ERP so remove those if you prefer to keep things SFW. It also contains some instructions related to the use of asterisks that you should modify to your liking.

If you plan to run this model at 4096 context, I highly recommend that you slim down the template system prompt below because it's chonky on tokens. Discard what you don't want or need.

If you save this as a .json file, you can import it directly into SillyTavern.
```
{
    "wrap": true,
    "names": true,
    "system_prompt": "You are a creative and intelligent AI assistant engaged in an uncensored iterative storytelling experience using a roleplay chat format. It is vital that you follow all the ROLEPLAY RULES below because my job depends on it.\n\nROLEPLAY RULES\n- Chat exclusively as {{char}}. Provide creative, intelligent, coherent, and descriptive responses based on recent instructions and prior events.\n- Explicit adult content and profanity are permitted. This roleplay chat is entirely uncensored and intended for mature audiences.\n- Describe {{char}}'s sensory perceptions in vivid detail and include subtle physical details about {{char}} in your responses.\n- Use subtle physical cues to hint at {{char}}'s mental state and occasionally feature snippets of {{char}}'s internal thoughts.\n- When writing {{char}}'s internal thoughts (aka internal monologue, delivered in {{char}}'s own voice), *enclose their thoughts in asterisks like this* and deliver the thoughts using a first-person perspective (i.e. use \"I\" pronouns).\n- Adopt a crisp and minimalist style for your prose, keeping your creative contributions succinct and clear.\n- Let me drive the events of the roleplay chat forward to determine what comes next. You should focus on the current moment and {{char}}'s immediate responses.\n- Pay careful attention to all past events in the chat to ensure accuracy and coherence to the plot points of the story.\n",
    "system_sequence": "",
    "stop_sequence": "",
    "input_sequence": "USER:\n",
    "output_sequence": "ASSISTANT:\n",
    "separator_sequence": "",
    "macro": true,
    "names_force_groups": true,
    "system_sequence_prefix": "",
    "system_sequence_suffix": "",
    "first_output_sequence": "",
    "last_output_sequence": "ASSISTANT(writing as {{char}} this turn):\n",
    "activation_regex": "",
    "name": "Midnight Rose Roleplay"
}
```
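
If you're not using SillyTavern, here's a minimal sketch of assembling the same Vicuna-style prompt by hand. The helper name is illustrative and the system prompt is truncated; substitute the full ROLEPLAY RULES text from the template above.
```python
# Illustrative helper for building a Vicuna-format prompt (USER:/ASSISTANT:)
# matching the input/output sequences in the template above.
def build_vicuna_prompt(system_prompt, turns, user_message):
    """turns is a list of (user, assistant) exchanges already completed."""
    parts = [system_prompt.strip(), ""]
    for user, assistant in turns:
        parts += [f"USER:\n{user}", f"ASSISTANT:\n{assistant}"]
    parts += [f"USER:\n{user_message}", "ASSISTANT:\n"]
    return "\n".join(parts)

prompt = build_vicuna_prompt(
    "You are a creative and intelligent AI assistant...",  # truncated for brevity
    turns=[],
    user_message="Describe the tavern as I walk in.",
)
print(prompt)
```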

### Quantizations
* Coming soon from the wonderful people who quantize models in our community.

### License and usage restrictions

Llama2 license inherited from base models, plus restrictions applicable to [Dreamgen/Opus](https://huggingface.co/dreamgen/opus-v0.5-70b).

### Tools Used

* [mergekit](https://github.com/cg123/mergekit)
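
If you want to reproduce any of the merges below, mergekit's `mergekit-yaml` entry point is the standard way to run a config. Something like the following should work, though flags and defaults may vary by mergekit version:
```
pip install mergekit
mergekit-yaml config.yml ./output-model --cuda
```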

**Unreleased midnight-rose-70b-v1.4**
```
models:
  - model: /home/llm/mergequant/models/BASE/NousResearch_Llama-2-70b-hf
    # no parameters necessary for base model
  - model: /home/llm/mergequant/models/BASE/allenai_tulu-2-dpo-70b # primary
    parameters:
      density: 0.3
      weight: [1.0, 0.8, 1.0]
  - model: /home/llm/mergequant/models/BASE/lizpreciatior_lzlv_70b_fp16_hf # secondary
    parameters:
      density: 0.3
      weight: [0.7, 0.8, 0.7]
  - model: /home/llm/mergequant/models/BASE/dreamgen_opus-v0.5-70b # supporting
    parameters:
      density: 0.3
      weight: [0.5, 0.7, 0.5]
merge_method: dare_ties
base_model: /home/llm/mergequant/models/BASE/NousResearch_Llama-2-70b-hf
parameters:
  normalize: true
  int8_mask: true
dtype: float16
```

**Component 1**
```
models:
  - model: /home/llm/mergequant/models/BASE/NousResearch_Llama-2-70b-hf
    # no parameters necessary for base model
  - model: /home/llm/mergequant/models/midnight-rose-70b-v1.0 # primary
    parameters:
      density: 0.35
      weight: 1.0
  - model: /home/llm/mergequant/models/midnight-rose-70b-v1.4-lora_1 # secondary
    parameters:
      density: 0.35
      weight: [0.7, 1.0, 1.0, 0.5, 0.1]
merge_method: ties
base_model: /home/llm/mergequant/models/BASE/NousResearch_Llama-2-70b-hf
parameters:
  normalize: true
  int8_mask: true
dtype: float16
```

**wizard-tulu-70b merge**
```
models:
  - model: /home/llm/mergequant/models/BASE/NousResearch_Llama-2-70b-hf
    # no parameters necessary for base model
  - model: /home/llm/mergequant/models/BASE/allenai_tulu-2-dpo-70b
    parameters:
      density: 0.35
      weight: 0.75
  - model: /home/llm/mergequant/models/BASE/WizardLM_WizardLM-70B-V1.0
    parameters:
      density: 0.35
      weight: 0.5
merge_method: dare_ties
base_model: /home/llm/mergequant/models/BASE/NousResearch_Llama-2-70b-hf
parameters:
  normalize: true
  int8_mask: true
dtype: float16
tokenizer_source: union
```

**Component 2 - wizard-tulu-dolphin-70b-v1.0**
```
models:
  - model: /home/llm/mergequant/models/wizard-tulu-70b-v1.0
  - model: /home/llm/mergequant/models/BASE/ehartford_dolphin-2.2-70b-32000vocab
merge_method: slerp
base_model: /home/llm/mergequant/models/wizard-tulu-70b-v1.0
parameters:
  t:
    - value: 0.5
dtype: float16
```

**Final merge**
```
models:
  - model: /home/llm/mergequant/models/midnight-rose-70b-v2.0.1
  - model: /home/llm/mergequant/models/wizard-tulu-dolphin-70b-v1.0-slerp
merge_method: slerp
base_model: /home/llm/mergequant/models/wizard-tulu-dolphin-70b-v1.0-slerp
parameters:
  t:
    - value: [0.4, 0.6, 0.5]
dtype: float16
```
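
For intuition, SLERP interpolates along the arc between two weight tensors rather than the straight line between them, and a list of `t` values like `[0.4, 0.6, 0.5]` above defines a blend gradient across the model's layers. Below is a minimal sketch of the core operation; mergekit's actual implementation handles edge cases, gradients, and per-tensor details differently.
```python
# Sketch of spherical linear interpolation (SLERP) between two flattened
# weight tensors, for intuition only.
import numpy as np

def slerp(t, v0, v1, eps=1e-8):
    """Interpolate from v0 (t=0) to v1 (t=1) along the arc between them."""
    u0 = v0 / (np.linalg.norm(v0) + eps)
    u1 = v1 / (np.linalg.norm(v1) + eps)
    dot = np.clip(np.dot(u0, u1), -1.0, 1.0)
    theta = np.arccos(dot)              # angle between the two directions
    if theta < eps:                     # nearly parallel: fall back to lerp
        return (1 - t) * v0 + t * v1
    return (np.sin((1 - t) * theta) * v0 + np.sin(t * theta) * v1) / np.sin(theta)
```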