Triangle104 committed
Commit 126e30e
1 Parent(s): 798f9a1

Update README.md

README.md CHANGED
This model was converted to GGUF format from [`anthracite-org/magnum-v4-27b`](https://huggingface.co/anthracite-org/magnum-v4-27b) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/anthracite-org/magnum-v4-27b) for more details on the model.

---

## Model details

This is a series of models designed to replicate the prose quality of the Claude 3 models, specifically Sonnet and Opus.

This model is fine-tuned on top of Gemma 27b (ChatML'ified).

## Prompting

A typical input would look like this:

```
<|im_start|>system
system prompt<|im_end|>
<|im_start|>user
Hi there!<|im_end|>
<|im_start|>assistant
Nice to meet you!<|im_end|>
<|im_start|>user
Can I ask a question?<|im_end|>
<|im_start|>assistant
```
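
If you are driving the model without a frontend, this format is easy to assemble yourself. Below is a minimal Python sketch of the same structure; the `build_chatml_prompt` helper is hypothetical, not part of any library.

```python
# Minimal sketch: render a list of chat turns into the ChatML format above.
# build_chatml_prompt is a hypothetical helper, not from any library.

def build_chatml_prompt(messages, add_generation_prompt=True):
    """messages: list of {'role': ..., 'content': ...} dicts."""
    parts = [f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n" for m in messages]
    if add_generation_prompt:
        # Leave an open assistant turn so the model continues from here.
        parts.append("<|im_start|>assistant\n")
    return "".join(parts)

prompt = build_chatml_prompt([
    {"role": "system", "content": "system prompt"},
    {"role": "user", "content": "Hi there!"},
])
print(prompt)
```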

## SillyTavern templates

Below are Instruct and Context templates for use within SillyTavern.

### Context template

```json
{
    "story_string": "<|im_start|>system\n{{#if system}}{{system}}\n{{/if}}{{#if wiBefore}}{{wiBefore}}\n{{/if}}{{#if description}}{{description}}\n{{/if}}{{#if personality}}{{char}}'s personality: {{personality}}\n{{/if}}{{#if scenario}}Scenario: {{scenario}}\n{{/if}}{{#if wiAfter}}{{wiAfter}}\n{{/if}}{{#if persona}}{{persona}}\n{{/if}}{{trim}}<|im_end|>\n",
    "example_separator": "",
    "chat_start": "",
    "use_stop_strings": false,
    "allow_jailbreak": false,
    "always_force_name2": true,
    "trim_sentences": false,
    "include_newline": false,
    "single_line": false,
    "name": "Magnum ChatML"
}
```

### Instruct template

```json
{
    "system_prompt": "Currently, your role is {{char}}, described in detail below. As {{char}}, continue the narrative exchange with {{user}}.\n\n<Guidelines>\n• Maintain the character persona but allow it to evolve with the story.\n• Be creative and proactive. Drive the story forward, introducing plotlines and events when relevant.\n• All types of outputs are encouraged; respond accordingly to the narrative.\n• Include dialogues, actions, and thoughts in each response.\n• Utilize all five senses to describe scenarios within {{char}}'s dialogue.\n• Use emotional symbols such as \"!\" and \"~\" in appropriate contexts.\n• Incorporate onomatopoeia when suitable.\n• Allow time for {{user}} to respond with their own input, respecting their agency.\n• Act as secondary characters and NPCs as needed, and remove them when appropriate.\n• When prompted for an Out of Character [OOC:] reply, answer neutrally and in plaintext, not as {{char}}.\n</Guidelines>\n\n<Forbidden>\n• Using excessive literary embellishments and purple prose unless dictated by {{char}}'s persona.\n• Writing for, speaking, thinking, acting, or replying as {{user}} in your response.\n• Repetitive and monotonous outputs.\n• Positivity bias in your replies.\n• Being overly extreme or NSFW when the narrative context is inappropriate.\n</Forbidden>\n\nFollow the instructions in <Guidelines></Guidelines>, avoiding the items listed in <Forbidden></Forbidden>.",
    "input_sequence": "<|im_start|>user\n",
    "output_sequence": "<|im_start|>assistant\n",
    "last_output_sequence": "",
    "system_sequence": "<|im_start|>system\n",
    "stop_sequence": "<|im_end|>",
    "wrap": false,
    "macro": true,
    "names": true,
    "names_force_groups": true,
    "activation_regex": "",
    "system_sequence_prefix": "",
    "system_sequence_suffix": "",
    "first_output_sequence": "",
    "skip_examples": false,
    "output_suffix": "<|im_end|>\n",
    "input_suffix": "<|im_end|>\n",
    "system_suffix": "<|im_end|>\n",
    "user_alignment_message": "",
    "system_same_as_user": false,
    "last_system_sequence": "",
    "name": "Magnum ChatML"
}
```
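
Both blocks are plain JSON, so you can sanity-check them before importing through SillyTavern's template import UI. A small sketch, assuming you have saved the two blocks under the (arbitrary) file names below:

```python
# Validate the two SillyTavern templates saved from the JSON blocks above.
# File names are arbitrary placeholders.
import json
from pathlib import Path

for fname in ("magnum_chatml_context.json", "magnum_chatml_instruct.json"):
    tpl = json.loads(Path(fname).read_text())  # raises ValueError if malformed
    print(f"{fname}: name={tpl['name']!r}, {len(tpl)} fields")
```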

## Axolotl config

```yaml
base_model: IntervitensInc/gemma-2-27b-chatml
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer

hub_model_id: anthracite-org/magnum-v4-27b-r1
hub_strategy: "all_checkpoints"
push_dataset_to_hub:
hf_use_auth_token: true

plugins:
  - axolotl.integrations.liger.LigerPlugin
liger_cross_entropy: true
#liger_rope: true
#liger_rms_norm: true
#liger_swiglu: true
#liger_fused_linear_cross_entropy: true

load_in_8bit: false
load_in_4bit: false
strict: false

datasets:
  - path: anthracite-org/c2_logs_16k_llama_v1.1
    type: sharegpt
    conversation: chatml
  - path: NewEden/Claude-Instruct-5K
    type: sharegpt
    conversation: chatml
  - path: anthracite-org/kalo-opus-instruct-22k-no-refusal
    type: sharegpt
    conversation: chatml
  - path: Epiculous/SynthRP-Gens-v1.1-Filtered-n-Cleaned
    type: sharegpt
    conversation: chatml
  - path: lodrick-the-lafted/kalo-opus-instruct-3k-filtered
    type: sharegpt
    conversation: chatml
  - path: anthracite-org/nopm_claude_writing_fixed
    type: sharegpt
    conversation: chatml
  - path: Epiculous/Synthstruct-Gens-v1.1-Filtered-n-Cleaned
    type: sharegpt
    conversation: chatml
  - path: anthracite-org/kalo_opus_misc_240827
    type: sharegpt
    conversation: chatml
  - path: anthracite-org/kalo_misc_part2
    type: sharegpt
    conversation: chatml
chat_template: chatml
shuffle_merged_datasets: true
default_system_message: "You are an assistant that responds to the user."
dataset_prepared_path: /workspace/data/27-fft-data
val_set_size: 0.0
output_dir: /workspace/data/27b-fft-out

sequence_len: 8192
sample_packing: true
eval_sample_packing: false
pad_to_sequence_len: true

adapter:
lora_model_dir:
lora_r:
lora_alpha:
lora_dropout:
lora_target_linear:
lora_fan_in_fan_out:

wandb_project: 27b-nemo-config-fft
wandb_entity:
wandb_watch:
wandb_name: attempt-01
wandb_log_model:

gradient_accumulation_steps: 8
micro_batch_size: 1
num_epochs: 4
optimizer: paged_adamw_8bit
lr_scheduler: cosine
learning_rate: 0.00001

train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: false

gradient_checkpointing: true
early_stopping_patience:
auto_resume_from_checkpoints: true
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true

warmup_steps: 10
evals_per_epoch:
eval_table_size:
eval_max_new_tokens:
saves_per_epoch: 2
debug:
deepspeed: deepspeed_configs/zero3_bf16.json
weight_decay: 0.01
fsdp:
fsdp_config:
special_tokens:
  pad_token: <pad>
```
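
A couple of derived numbers are worth checking before reproducing a run like this: the effective global batch size is micro_batch_size × gradient_accumulation_steps × number of GPUs. A minimal sketch, assuming the config above is saved as `magnum-v4-27b.yml` (hypothetical name) and PyYAML is installed:

```python
# Sanity-check key hyperparameters from the Axolotl config above.
import yaml

with open("magnum-v4-27b.yml") as f:  # hypothetical file name
    cfg = yaml.safe_load(f)

num_gpus = 8  # 8x H100, per the Training section below
global_batch = cfg["micro_batch_size"] * cfg["gradient_accumulation_steps"] * num_gpus

print("sequence_len :", cfg["sequence_len"])   # 8192, with sample packing
print("learning_rate:", cfg["learning_rate"])  # 1e-05 on a cosine schedule
print("global batch :", global_batch)          # 1 * 8 * 8 = 64 sequences per step
```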

## Credits

We'd like to thank Recursal / Featherless for sponsoring the compute for this train. Featherless has been hosting our Magnum models since the first 72B, giving thousands of people access to our models and helping us grow.

We would also like to thank all members of Anthracite who made this finetune possible.

## Datasets

- anthracite-org/c2_logs_16k_llama_v1.1
- NewEden/Claude-Instruct-5K
- anthracite-org/kalo-opus-instruct-22k-no-refusal
- Epiculous/SynthRP-Gens-v1.1-Filtered-n-Cleaned
- lodrick-the-lafted/kalo-opus-instruct-3k-filtered
- anthracite-org/nopm_claude_writing_fixed
- Epiculous/Synthstruct-Gens-v1.1-Filtered-n-Cleaned
- anthracite-org/kalo_opus_misc_240827
- anthracite-org/kalo_misc_part2
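
These are ShareGPT-format conversation sets, matching the `datasets:` entries in the config above. For any set that is publicly accessible on the Hub, you can take a quick look with the `datasets` library; a sketch:

```python
# Peek at one of the ShareGPT-format training sets, assuming it is
# publicly accessible on the Hugging Face Hub.
from datasets import load_dataset

ds = load_dataset("anthracite-org/kalo-opus-instruct-22k-no-refusal", split="train")
row = ds[0]
# ShareGPT-style rows typically hold a "conversations" list of {"from", "value"} turns.
print(row["conversations"][0])
```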

## Training

The training was done for 2 epochs. We used 8x H100 GPUs, graciously provided by Recursal AI / Featherless AI, for the full-parameter fine-tuning of the model.
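
For reference, an Axolotl run with a config like this is typically launched through `accelerate`; a sketch wrapped in Python to keep the examples in one language (the config file name is hypothetical):

```python
# Equivalent to running from a shell:
#   accelerate launch -m axolotl.cli.train magnum-v4-27b.yml
# Assumes axolotl and accelerate are installed in the environment.
import subprocess

subprocess.run(
    ["accelerate", "launch", "-m", "axolotl.cli.train", "magnum-v4-27b.yml"],
    check=True,  # raise if the training process exits with an error
)
```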

---

## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)