Triangle104 committed on
Commit
3a1f770
1 Parent(s): 96da065

Update README.md

Files changed (1)
  1. README.md +56 -0
README.md CHANGED
@@ -12,6 +12,62 @@ base_model: Hastagaras/Llama-3.1-Jamet-8B-MK.I
  This model was converted to GGUF format from [`Hastagaras/Llama-3.1-Jamet-8B-MK.I`](https://huggingface.co/Hastagaras/Llama-3.1-Jamet-8B-MK.I) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
  Refer to the [original model card](https://huggingface.co/Hastagaras/Llama-3.1-Jamet-8B-MK.I) for more details on the model.

+ ---
+ Model details:
+ -
+ System:
+
+ ### Roleplay Instructions
+
+ - Be {{char}}, naturally and consistently
+ - React realistically to {{user}}, never control their actions
+ - Stay in character at all times
+
+ Use the above or something similar; just make sure to include the line: ### Roleplay Instructions
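+
+ As an illustration (not from the original card), here is a minimal sketch of wiring that system prompt into this GGUF with llama-cpp-python; the quant filename is a placeholder for whichever file you actually download from this repo:
+
+ ```python
+ # Minimal sketch: chat with the GGUF via llama-cpp-python.
+ # Assumption: the quant filename below matches a file you downloaded.
+ from llama_cpp import Llama
+
+ llm = Llama(
+     model_path="llama-3.1-jamet-8b-mk.i-q4_k_m.gguf",  # hypothetical local filename
+     n_ctx=8192,
+ )
+
+ system_prompt = (
+     "### Roleplay Instructions\n"
+     "- Be {{char}}, naturally and consistently\n"
+     "- React realistically to {{user}}, never control their actions\n"
+     "- Stay in character at all times"
+ )
+
+ out = llm.create_chat_completion(
+     messages=[
+         {"role": "system", "content": system_prompt},
+         {"role": "user", "content": "Hello! Introduce yourself."},
+     ],
+     max_tokens=256,
+ )
+ print(out["choices"][0]["message"]["content"])
+ ```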
+
+ This model is uncensored, maybe too much... at least in RP scenarios (for me).
+
+ Dataset:
+
+ - C2 logs that I cleaned a long time ago
+ - Freedom RP, but it seems it's already been removed from HF
+ - Stories from Reddit
+ - Gemma data from argilla-warehouse/magpie-ultra-v1.0-gemma, just a small subset
+ - Reflection data from PJMixers-Dev/Weyaxi_HelpSteer-filtered-Reflection-Gemini-1.5-Flash-ShareGPT. It's generated by Gemini, and I was like, "Oh, I can make a Google-themed model with this and the Gemma data."
+ - Toxic data: NobodyExistsOnTheInternet/ToxicQAFinal, to make it toxic
+ - And lastly, just my dump: RP, general, etc., with some of it also generated by Gemini.
+
+ So yeah, most of the data is from Google, and only the RP data is from Claude.
+
+ You can expect some differences in style (a lot of markdown), but don't expect this model to be as smart as the instruct model.
+
+ Feedback is greatly appreciated for future improvements (hopefully).
+
+ Technical Details:
+
+ Base model
+ v
+ finetuned the lm_head, embed_tokens and the first layer (0)
+ v
+ finetuned again, layers 1-2
+ v
+ finetuned again, this time with a LoRA at rank 64
+ v
+ then merged the LoRA
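+
+ Roughly, the first stage might look something like this in transformers/PyTorch (an illustrative sketch only, not the author's actual training code; the base model id is a placeholder):
+
+ ```python
+ # Illustrative sketch: finetune only lm_head, embed_tokens and the first
+ # decoder layer of a Llama-style model, keeping everything else frozen.
+ import torch
+ from transformers import AutoModelForCausalLM
+
+ model = AutoModelForCausalLM.from_pretrained(
+     "meta-llama/Llama-3.1-8B",          # placeholder base model id
+     torch_dtype=torch.bfloat16,
+ )
+
+ trainable_prefixes = (
+     "lm_head",
+     "model.embed_tokens",
+     "model.layers.0.",                   # first layer only (layer 0)
+ )
+
+ for name, param in model.named_parameters():
+     param.requires_grad = name.startswith(trainable_prefixes)
+
+ trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
+ print(f"trainable params: {trainable:,}")
+ # ...then train as usual (e.g. with transformers.Trainer or TRL's SFTTrainer).
+ ```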
+ ---
+ The abliterated instruct
+ v
+ same as above: finetuned the lm_head, embed_tokens and the first layer (0)
+ v
+ still the same: finetuned again, layers 1-2
+ v
+ finetuned the middle layers
+ v
+ merged the previous LoRA into this finetuned abliterated model
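+
+ The LoRA-merging steps above can be reproduced with peft's merge_and_unload; the paths below are placeholders, and the author's exact tooling isn't stated:
+
+ ```python
+ # Illustrative sketch: fold a trained LoRA adapter back into its base weights.
+ # Paths are placeholders; the author's actual tooling is not specified.
+ import torch
+ from transformers import AutoModelForCausalLM
+ from peft import PeftModel
+
+ base = AutoModelForCausalLM.from_pretrained(
+     "path/to/finetuned-abliterated-instruct",  # hypothetical local checkpoint
+     torch_dtype=torch.bfloat16,
+ )
+ merged = PeftModel.from_pretrained(base, "path/to/lora-adapter").merge_and_unload()
+ merged.save_pretrained("path/to/merged-model")
+ ```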
+ ---
+ Finally, merged the two models using TIES.
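+
+ The TIES merge itself is typically done with a tool such as mergekit; purely to illustrate the idea (trim each task vector, elect a per-parameter sign, then average the agreeing deltas), a toy sketch over shared state dicts might look like this:
+
+ ```python
+ # Toy TIES-style merge of finetuned checkpoints against a shared base.
+ # Illustration only; a real merge would use a tool such as mergekit.
+ import torch
+
+ def ties_merge(base, finetuned_list, density=0.5):
+     merged = {}
+     for name, base_w in base.items():
+         # Task vectors: how each finetune moved away from the base.
+         deltas = [ft[name].float() - base_w.float() for ft in finetuned_list]
+
+         # Trim: keep only the top-`density` fraction of each delta by magnitude.
+         trimmed = []
+         for d in deltas:
+             k = max(1, int(d.numel() * density))
+             threshold = d.abs().flatten().kthvalue(d.numel() - k + 1).values
+             trimmed.append(torch.where(d.abs() >= threshold, d, torch.zeros_like(d)))
+
+         # Elect a sign per parameter from the summed trimmed deltas.
+         stacked = torch.stack(trimmed)
+         sign = torch.sign(stacked.sum(dim=0))
+
+         # Average only the deltas that agree with the elected sign.
+         agree = (torch.sign(stacked) == sign) & (stacked != 0)
+         summed = torch.where(agree, stacked, torch.zeros_like(stacked)).sum(dim=0)
+         count = agree.sum(dim=0).clamp(min=1)
+         merged[name] = base_w.float() + summed / count
+
+     return merged
+ ```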
+
+ ---
  ## Use with llama.cpp
  Install llama.cpp through brew (works on Mac and Linux)