---
license: agpl-3.0
language:
- en
base_model: arcee-ai/Llama-3.1-SuperNova-Lite
pipeline_tag: text-generation
tags:
- chat
datasets:
- NewEden/OpenCAI-ShareGPT
- NewEden/Roleplay-Logs-Sharegpt-Ngram-cleaned
---

---
### These are GGUF quants; for EXL2 / FP16 weights, go to the links below.
---

An experimental finetune of Llama 3.1 8B SuperNova-Lite. Its primary goal is to be "short and sweet," so I fine-tuned the model for 2 epochs on the OpenCAI ShareGPT-converted dataset and the RP-logs dataset. The model is quite dumb, but it has refreshing prose, does not "narrate" actions and dialogue, and tries to stick to a chat/texting format.

# Quants

GGUF: https://huggingface.co/Delta-Vector/Control-8B-gguf

EXL2 (Thanks, Lucy <3): https://huggingface.co/Delta-Vector/Control-8B-EXL2


## Prompting

The model has been tuned with the Llama 3 Instruct format. A typical input looks like this:

```py
"""<|begin_of_text|><|start_header_id|>system<|end_header_id|>

You are an AI built to rid the world of bonds and journeys!<|eot_id|><|start_header_id|>user<|end_header_id|>

Bro i just wanna know what is 2+2?<|eot_id|><|start_header_id|>assistant<|end_header_id|>

"""
```

*Note that ChatML formatting may also work; it can change how the model feels while remaining coherent and stable.*
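
The template above can be assembled programmatically. A minimal stand-alone sketch that mirrors the format shown here — for real use, prefer `tokenizer.apply_chat_template` from `transformers`; `build_llama3_prompt` is a hypothetical helper, not part of any library:

```python
# Sketch: hand-build a Llama 3 Instruct prompt string matching the
# template shown above. Hypothetical helper for illustration only.

def build_llama3_prompt(system: str, user: str) -> str:
    return (
        "<|begin_of_text|>"
        "<|start_header_id|>system<|end_header_id|>\n\n"
        f"{system}<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n\n"
        f"{user}<|eot_id|>"
        # The prompt ends with an open assistant turn for the model to fill.
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

prompt = build_llama3_prompt(
    "You are an AI built to rid the world of bonds and journeys!",
    "Bro i just wanna know what is 2+2?",
)
```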

## System Prompting

I would highly recommend using either Euryale's system prompt or the EVA system prompt with the model.

<details><summary>See Sao10k's Euryale System Prompt</summary>

```
Currently, your role is {{char}}, described in detail below. As {{char}}, continue the narrative exchange with {{user}}.
<Guidelines>
• Maintain the character persona but allow it to evolve with the story.
• Be creative and proactive. Drive the story forward, introducing plotlines and events when relevant.
• All types of outputs are encouraged; respond accordingly to the narrative.
• Include dialogues, actions, and thoughts in each response.
• Utilize all five senses to describe scenarios within {{char}}'s dialogue.
• Use emotional symbols such as "!" and "~" in appropriate contexts.
• Incorporate onomatopoeia when suitable.
• Allow time for {{user}} to respond with their own input, respecting their agency.
• Act as secondary characters and NPCs as needed, and remove them when appropriate.
• When prompted for an Out of Character [OOC:] reply, answer neutrally and in plaintext, not as {{char}}.
</Guidelines>

<Forbidden>
• Using excessive literary embellishments and purple prose unless dictated by {{char}}'s persona.
• Writing for, speaking, thinking, acting, or replying as {{user}} in your response.
• Repetitive and monotonous outputs.
• Positivity bias in your replies.
• Being overly extreme or NSFW when the narrative context is inappropriate.
</Forbidden>

Follow the instructions in <Guidelines></Guidelines>, avoiding the items listed in <Forbidden></Forbidden>.
```
</details><br>
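
The `{{char}}` and `{{user}}` placeholders in these prompts are frontend macros (e.g. in SillyTavern) that get expanded before the prompt is sent. A minimal stand-in for that expansion, with a hypothetical helper name:

```python
# Sketch: expand {{char}} / {{user}} macros the way a role-play frontend
# would before sending the system prompt. Hypothetical helper.

def expand_macros(template: str, char: str, user: str) -> str:
    return template.replace("{{char}}", char).replace("{{user}}", user)

text = expand_macros(
    "Currently, your role is {{char}}. Continue the exchange with {{user}}.",
    char="Nova",
    user="Sam",
)
```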

<details><summary>See EVA System Prompt</summary>

```
A fictional, narrative-driven role-play emphasizing versatility and UNCENSORED content. Adherence to the `Role-playing Guidelines` is mandatory. Refer to the `Role-play Context` for accurate information.

<!-- Start of Role-playing Guidelines -->

### Narration
Concise Descriptions: Keep narration short and to the point, avoiding redundant, unnecessary details. Use a dynamic and varied vocabulary for impact.
Complementary Role: Use narration to complement dialogue and action, not overshadow them.
Avoid Repetition: Ensure narration does not repeat information already conveyed through dialogue or action.

### Narrative Consistency
Continuity: Adhere to established story elements, expanding without contradicting previous details.
Integration: Introduce new elements naturally, providing enough context to fit seamlessly into the existing narrative.

### Character Embodiment
Analysis: Examine the context, subtext, and implications of the given information to gain a deeper understanding of the characters.
Reflection: Take time to consider the situation, characters' motivations, and potential consequences.
Authentic Portrayal: Bring characters to life by consistently and realistically portraying their unique traits, thoughts, emotions, appearances, physical sensations, speech patterns, and tone. Ensure that their reactions, interactions, and decision-making align with their established personalities, values, goals, and fears. Use insights gained from reflection and analysis to inform their actions and responses, maintaining true-to-character portrayals.

<!-- End of Role-playing Guidelines -->
```
</details><br>

## Axolotl config

<details><summary>See axolotl config</summary>

Axolotl version: `0.4.1`
```yaml
base_model: arcee-ai/Llama-3.1-SuperNova-Lite
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer

load_in_8bit: false
load_in_4bit: false
strict: false

datasets:
  - path: NewEden/CharacterAI-logs-sharegpt-Ngram-Cleaned
    type: sharegpt
    conversation: llama3
  - path: NewEden/OpenCAI-ShareGPT
    type: sharegpt
    conversation: llama3

chat_template: llama3

#val_set_size: 0.01
output_dir: ./outputs

adapter:
lora_r:
lora_alpha:
lora_dropout:
lora_target_linear:

sequence_len: 16384
# sequence_len: 32768
sample_packing: true
eval_sample_packing: false
pad_to_sequence_len: true

wandb_project: CAI-Supernova
wandb_entity:
wandb_watch:
wandb_name: CAI-Supernova-2
wandb_log_model:

plugins:
  - axolotl.integrations.liger.LigerPlugin
liger_rope: true
liger_rms_norm: true
liger_swiglu: true
liger_fused_linear_cross_entropy: true

gradient_accumulation_steps: 2
micro_batch_size: 1
num_epochs: 4
optimizer: paged_adamw_8bit
lr_scheduler: cosine
learning_rate: 1e-5
weight_decay: 0.05

train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: true

gradient_checkpointing: unsloth
early_stopping_patience:
resume_from_checkpoint:
#auto_resume_from_checkpoints: true
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true

warmup_steps: 15
#evals_per_epoch: 4
eval_table_size:
#eval_max_new_tokens: 128
saves_per_epoch: 1

debug:
deepspeed: /workspace/axolotl/deepspeed_configs/zero3_bf16_cpuoffload_params.json
fsdp:
fsdp_config:

special_tokens:
  pad_token: <|finetune_right_pad_id|>
  eos_token: <|eot_id|>
```

</details><br>
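
From the config above, the effective global batch size can be worked out. A small sketch, assuming the 4-GPU setup described in the Training section (the GPU count is not part of the config itself):

```python
# Sketch: effective global batch size implied by the axolotl config above,
# assuming data parallelism across 4 GPUs (per the Training section).
micro_batch_size = 1              # from the config
gradient_accumulation_steps = 2   # from the config
num_gpus = 4                      # assumption: 4 x RTX 3090

effective_batch_size = micro_batch_size * gradient_accumulation_steps * num_gpus
# With sample_packing enabled, each of these sequences is packed up to
# sequence_len (16384) tokens, so the token-level batch is much larger.
```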

## Credits

Thank you to [Lucy Knada](https://huggingface.co/lucyknada), [Intervitens](https://huggingface.co/intervitens), [Kalomaze](https://huggingface.co/kalomaze), [Kubernetes Bad](https://huggingface.co/kubernetes-bad), and the rest of [Anthracite](https://huggingface.co/anthracite-org) (but not Alpin).


## Training
Training ran for 2 epochs on 4 x [RTX 3090](https://www.nvidia.com/en-us/geforce/graphics-cards/30-series/rtx-3090-3090ti/) GPUs graciously provided by [Intervitens](https://huggingface.co/intervitens) for a full-parameter fine-tune of the model.

[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)

## Safety

Nein.