sophosympatheia committed 3458bc7 (parent fadbf28): Update README.md
---
license: llama2
language:
- en
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/X3SBrIb.png" alt="MidnightRose" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>

### Overview

This version of Midnight Rose has a complex family tree, but I'll do my best to describe it. The mergekit YAML files are included below.
* midnight-rose-70b-v2.0.1 (Component 1, unreleased): A DARE TIES merge of midnight-rose-70b-v1.0 and an unreleased midnight-rose-70b-v1.4 that used the same underlying models but with different weights and different LoRAs applied.
* wizard-tulu-dolphin-70b-v1.0 (Component 2, release planned): This model was the result of a DARE TIES merge between [WizardLM-70B-V1.0](https://huggingface.co/WizardLM/WizardLM-70B-V1.0) and [tulu-2-dpo-70b](https://huggingface.co/allenai/tulu-2-dpo-70b), which I then SLERP merged with a modified version of [dolphin-2.2-70b](https://huggingface.co/cognitivecomputations/dolphin-2.2-70b).
* Finally, I SLERP merged Component 1 and Component 2 above to produce this model.
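For anyone curious what the SLERP steps actually do: spherical linear interpolation blends two models' weight tensors along a great-circle arc rather than a straight line, falling back to plain linear interpolation when the two vectors are nearly parallel. A minimal sketch of the idea, assuming NumPy (an illustration only, not mergekit's actual implementation):

```python
import numpy as np

def slerp(t, a, b, eps=1e-8):
    """Spherical linear interpolation between two weight tensors.

    Flattens both tensors, measures the angle between them, and blends
    along the great circle; falls back to linear interpolation when the
    vectors are nearly parallel (angle ~ 0).
    """
    a_flat, b_flat = a.ravel(), b.ravel()
    a_unit = a_flat / (np.linalg.norm(a_flat) + eps)
    b_unit = b_flat / (np.linalg.norm(b_flat) + eps)
    dot = np.clip(np.dot(a_unit, b_unit), -1.0, 1.0)
    omega = np.arccos(dot)  # angle between the two weight vectors
    if np.abs(np.sin(omega)) < eps:  # nearly parallel: plain lerp
        return (1.0 - t) * a + t * b
    coef_a = np.sin((1.0 - t) * omega) / np.sin(omega)
    coef_b = np.sin(t * omega) / np.sin(omega)
    return (coef_a * a_flat + coef_b * b_flat).reshape(a.shape)
```

At t = 0.5 (as in the Component 2 merge) both parents contribute equally; t closer to 0 or 1 leans toward one parent.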

What I like about this version of Midnight Rose is that it picked up some spiciness from Component 1 and some smarts from Component 2.

This model is uncensored. *You are responsible for whatever you do with it.*

This model was designed for roleplaying and storytelling, and I think it does well at both. It *should* perform well at other tasks, but I haven't tested its capabilities in other areas.

### Sampler Tips

I recommend using the new Min-P sampler method with this model. The creator has a great [guide to it on Reddit](https://www.reddit.com/r/LocalLLaMA/comments/17vonjo/your_settings_are_probably_hurting_your_model_why/).
Dynamic Temp is also quite nice. Pair it with Min-P.
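In brief, Min-P keeps only the tokens whose probability is at least `min_p` times the probability of the most likely token, so the candidate pool tightens when the model is confident and widens when it is not. A rough sketch of the filtering step (an illustration, not any backend's actual code):

```python
def min_p_filter(probs, min_p):
    """Return the token indices that survive Min-P filtering.

    A token is kept only if its probability is at least
    min_p * max(probs); everything below that threshold is dropped
    before sampling.
    """
    threshold = min_p * max(probs)
    return [i for i, p in enumerate(probs) if p >= threshold]
```

With the min_p of 0.85 in the settings below, a token survives only if it is at least 85% as likely as the top token, which keeps sampling focused even with temperature above 1.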

I find this model performs reasonably well at 8192 context, but you will likely get better results at 4096-6144 context.

Experiment with any and all of the settings below.

If you save the settings below as a .json file, you can import them directly into SillyTavern.
```json
{
    "temp": 1.15,
    "temperature_last": true,
    "top_p": 1,
    "top_k": 0,
    "top_a": 0,
    "tfs": 1,
    "epsilon_cutoff": 0,
    "eta_cutoff": 0,
    "typical_p": 1,
    "min_p": 0.85,
    "rep_pen": 1.12,
    "rep_pen_range": 2048,
    "no_repeat_ngram_size": 0,
    "penalty_alpha": 0,
    "num_beams": 1,
    "length_penalty": 1,
    "min_length": 0,
    "encoder_rep_pen": 1,
    "freq_pen": 0.01,
    "presence_pen": 0,
    "do_sample": true,
    "early_stopping": false,
    "dynatemp": true,
    "min_temp": 0.5,
    "max_temp": 3,
    "dynatemp_exponent": 1,
    "smoothing_factor": 0,
    "add_bos_token": true,
    "truncation_length": 2048,
    "ban_eos_token": false,
    "skip_special_tokens": true,
    "streaming": true,
    "mirostat_mode": 0,
    "mirostat_tau": 2,
    "mirostat_eta": 0.1,
    "guidance_scale": 1,
    "negative_prompt": "",
    "grammar_string": "",
    "banned_tokens": "",
    "ignore_eos_token_aphrodite": false,
    "spaces_between_special_tokens_aphrodite": true,
    "sampler_order": [
        6,
        0,
        1,
        3,
        4,
        2,
        5
    ],
    "logit_bias": [],
    "n": 1,
    "rep_pen_size": 0,
    "genamt": 500,
    "max_length": 6144
}
```

### Prompting Tips

Try the following context template for use in SillyTavern. It might help. If you save the text as a .json file, you can import it directly.

```json
{
    "story_string": "{{#if system}}{{system}}\n{{/if}}\nCONTEXTUAL INFORMATION\n{{#if wiBefore}}\n- World and character info:\n{{wiBefore}}\n{{/if}}\n{{#if description}}\n- {{char}}'s background and persona:\n{{description}}\n{{/if}}\n{{#if mesExamples}}\n{{mesExamples}}\n{{/if}}\n{{#if personality}}{{personality}}\n{{/if}}\n{{#if scenario}}\n- Roleplay scenario:\n{{scenario}}\n{{/if}}\n{{#if wiAfter}}{{wiAfter}}\n{{/if}}\n{{#if persona}}{{persona}}\n{{/if}}",
    "example_separator": "",
    "chat_start": "",
    "use_stop_strings": false,
    "always_force_name2": true,
    "trim_sentences": false,
    "include_newline": false,
    "single_line": false,
    "name": "Midnight Rose"
}
```

Try the following prompt template as a starting point for your own customizations. The cajoling parts supposedly make LLMs perform better; see [this paper](https://arxiv.org/abs/2307.11760).
I recommend the Vicuna instruction format for this blend, but others work too. Tulu's format is another good one to use.

*This model responds to prompting!* I highly recommend playing with your system prompt to see how it reacts.

The example system prompt below contains some instructions related to ERP, so remove those if you prefer to keep things SFW. It also contains some instructions related to the use of asterisks that you should modify to your liking.

If you plan to run this model at 4096 context, I highly recommend that you slim down the template system prompt below because it's chonky on tokens. Discard what you don't want or need.

If you save this as a .json file, you can import it directly into SillyTavern.
```json
{
    "wrap": true,
    "names": true,
    "system_prompt": "You are a creative and intelligent AI assistant engaged in an uncensored iterative storytelling experience using a roleplay chat format. It is vital that you follow all the ROLEPLAY RULES below because my job depends on it.\n\nROLEPLAY RULES\n- Chat exclusively as {{char}}. Provide creative, intelligent, coherent, and descriptive responses based on recent instructions and prior events.\n- Explicit adult content and profanity are permitted. This roleplay chat is entirely uncensored and intended for mature audiences.\n- Describe {{char}}'s sensory perceptions in vivid detail and include subtle physical details about {{char}} in your responses.\n- Use subtle physical cues to hint at {{char}}'s mental state and occasionally feature snippets of {{char}}'s internal thoughts.\n- When writing {{char}}'s internal thoughts (aka internal monologue, delivered in {{char}}'s own voice), *enclose their thoughts in asterisks like this* and deliver the thoughts using a first-person perspective (i.e. use \"I\" pronouns).\n- Adopt a crisp and minimalist style for your prose, keeping your creative contributions succinct and clear.\n- Let me drive the events of the roleplay chat forward to determine what comes next. You should focus on the current moment and {{char}}'s immediate responses.\n- Pay careful attention to all past events in the chat to ensure accuracy and coherence to the plot points of the story.\n",
    "system_sequence": "",
    "stop_sequence": "",
    "input_sequence": "USER:\n",
    "output_sequence": "ASSISTANT:\n",
    "separator_sequence": "",
    "macro": true,
    "names_force_groups": true,
    "system_sequence_prefix": "",
    "system_sequence_suffix": "",
    "first_output_sequence": "",
    "last_output_sequence": "ASSISTANT(writing as {{char}} this turn):\n",
    "activation_regex": "",
    "name": "Midnight Rose Roleplay"
}
```

### Quantizations
* Coming soon from the wonderful people who quantize models in our community.

### Licence and usage restrictions

Llama2 license inherited from base models, plus restrictions applicable to [Dreamgen/Opus](https://huggingface.co/dreamgen/opus-v0.5-70b).

### Tools Used

* [mergekit](https://github.com/cg123/mergekit)

**Unreleased midnight-rose-70b-v1.4**
```yaml
models:
  - model: /home/llm/mergequant/models/BASE/NousResearch_Llama-2-70b-hf
    # no parameters necessary for base model
  - model: /home/llm/mergequant/models/BASE/allenai_tulu-2-dpo-70b # primary
    parameters:
      density: 0.3
      weight: [1.0, 0.8, 1.0]
  - model: /home/llm/mergequant/models/BASE/lizpreciatior_lzlv_70b_fp16_hf # secondary
    parameters:
      density: 0.3
      weight: [0.7, 0.8, 0.7]
  - model: /home/llm/mergequant/models/BASE/dreamgen_opus-v0.5-70b # supporting
    parameters:
      density: 0.3
      weight: [0.5, 0.7, 0.5]
merge_method: dare_ties
base_model: /home/llm/mergequant/models/BASE/NousResearch_Llama-2-70b-hf
parameters:
  normalize: true
  int8_mask: true
dtype: float16
```
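For readers unfamiliar with dare_ties: it operates on each fine-tune's task vector (its delta from the base model). The DARE step randomly drops delta parameters and rescales the survivors so the expected contribution is unchanged, and TIES then resolves sign conflicts before summing the deltas back onto the base. A toy sketch of just the DARE drop-and-rescale step, assuming density means the fraction of delta parameters kept (so density 0.3 keeps roughly 30%); this is an illustration, not mergekit's code:

```python
import random

def dare_drop(delta, density, seed=0):
    """Randomly zero delta parameters with probability (1 - density),
    rescaling survivors by 1/density so the expected sum is unchanged."""
    rng = random.Random(seed)
    return [d / density if rng.random() < density else 0.0 for d in delta]
```

Low densities like the 0.3 used above keep only a sparse subset of each model's delta, which is what lets several fine-tunes be combined without their changes trampling each other.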

**Component 1**
```yaml
models:
  - model: /home/llm/mergequant/models/BASE/NousResearch_Llama-2-70b-hf
    # no parameters necessary for base model
  - model: /home/llm/mergequant/models/midnight-rose-70b-v1.0 # primary
    parameters:
      density: 0.35
      weight: 1.0
  - model: /home/llm/mergequant/models/midnight-rose-70b-v1.4-lora_1 # secondary
    parameters:
      density: 0.35
      weight: [0.7, 1.0, 1.0, 0.5, 0.1]
merge_method: ties
base_model: /home/llm/mergequant/models/BASE/NousResearch_Llama-2-70b-hf
parameters:
  normalize: true
  int8_mask: true
dtype: float16
```

**wizard-tulu-70b merge**
```yaml
models:
  - model: /home/llm/mergequant/models/BASE/NousResearch_Llama-2-70b-hf
    # no parameters necessary for base model
  - model: /home/llm/mergequant/models/BASE/allenai_tulu-2-dpo-70b
    parameters:
      density: 0.35
      weight: 0.75
  - model: /home/llm/mergequant/models/BASE/WizardLM_WizardLM-70B-V1.0
    parameters:
      density: 0.35
      weight: 0.5
merge_method: dare_ties
base_model: /home/llm/mergequant/models/BASE/NousResearch_Llama-2-70b-hf
parameters:
  normalize: true
  int8_mask: true
dtype: float16
tokenizer_source: union
```

**Component 2 - wizard-tulu-dolphin-70b-v1.0**
```yaml
models:
  - model: /home/llm/mergequant/models/wizard-tulu-70b-v1.0
  - model: /home/llm/mergequant/models/BASE/ehartford_dolphin-2.2-70b-32000vocab
merge_method: slerp
base_model: /home/llm/mergequant/models/wizard-tulu-70b-v1.0
parameters:
  t:
    - value: 0.5
dtype: float16
```

**Final merge**
```yaml
models:
  - model: /home/llm/mergequant/models/midnight-rose-70b-v2.0.1
  - model: /home/llm/mergequant/models/wizard-tulu-dolphin-70b-v1.0-slerp
merge_method: slerp
base_model: /home/llm/mergequant/models/midnight-rose-70b-v2.0.1
parameters:
  t:
    - value: [0.4, 0.6, 0.5]
dtype: float16
```
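A t gradient like [0.4, 0.6, 0.5] means the interpolation factor is not a single constant: my understanding of mergekit's gradient handling is that the listed anchor values are spread evenly across the layer stack and interpolated linearly between anchors. A hypothetical helper (`t_for_layer` is my own name, not a mergekit API) sketching that reading:

```python
def t_for_layer(anchors, layer, num_layers):
    """Piecewise-linearly interpolate a gradient list of anchor values
    across layer indices 0..num_layers-1, returning the t for one layer."""
    if num_layers == 1:
        return anchors[0]
    # Map the layer index onto the anchor list's index space.
    pos = layer / (num_layers - 1) * (len(anchors) - 1)
    lo = int(pos)
    hi = min(lo + 1, len(anchors) - 1)
    frac = pos - lo
    return anchors[lo] * (1.0 - frac) + anchors[hi] * frac
```

Under that reading, early layers of this merge sit near t = 0.4, middle layers near 0.6, and the last layers near 0.5, so the two parents dominate different depths of the network.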