Henk717 committed on
Commit 8957a0a
1 Parent(s): 344392c

Update README.md

Files changed (1):
  1. README.md +110 -13

README.md CHANGED
@@ -11,6 +11,19 @@ tags:
 This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

 ## Merge Details

 ### Merge Method

 This model was merged using the passthrough merge method.
@@ -26,19 +39,103 @@ The following models were included in the merge:
 The following YAML configuration was used to produce this model:

 ```yaml
 dtype: float16
-merge_method: passthrough
 slices:
-  - sources:
-      - layer_range: [0, 16]
-        model: output/Estopia_Eru
-  - sources:
-      - layer_range: [8, 24]
-        model: PygmalionAI/pygmalion-2-13b
-  - sources:
-      - layer_range: [17, 32]
-        model: output/Estopia_Eru
-  - sources:
-      - layer_range: [25, 40]
-        model: PygmalionAI/pygmalion-2-13b
 ```
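The slice stacking in the config above can be sketched in plain Python (an illustration only, assuming mergekit's `layer_range` is half-open, so `[0, 16]` selects layers 0 through 15, and that passthrough simply concatenates the slices in order):

```python
# Hypothetical sketch of how the passthrough slices assemble the final model;
# not mergekit's actual implementation.
slices = [
    ("output/Estopia_Eru", (0, 16)),
    ("PygmalionAI/pygmalion-2-13b", (8, 24)),
    ("output/Estopia_Eru", (17, 32)),
    ("PygmalionAI/pygmalion-2-13b", (25, 40)),
]

# Concatenate every selected layer, in slice order.
stacked = [(model, layer)
           for model, (start, end) in slices
           for layer in range(start, end)]

print(len(stacked))  # 62 layers total, up from Llama-2-13B's 40
```

Note the overlapping ranges: layers 8-16 and 25-32 of `Estopia_Eru` appear alongside the Pygmalion slices covering the same depths, which is what gives frankenmerges their extra layer count.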
 
11
 This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

 ## Merge Details
+
+### Author's Comments
+How Shotmisser64 uses Daisuke:
+
+Use Alpaca; Pyg instruct is not recommended. I use (https://files.catbox.moe/61zzg9.json) along with a minimalist context template.
+Use the novel style, mainly because of Erebus. Markdown is untested.
+Use this as an RP assistant. Thanks to Pyg, it can also act as the 'human' if you want to play as their narrator. You may have to edit the response if things go sour at the start.
+Like Kayra, your config matters a lot, but after 3 turns it should get better. Feel free to even disable the instruct.
+Because of Pyg, I recommend putting a -3 bias on token [376] (the " token) in the Kcpp/ST logit bias, to reduce Pyg's dialogue spam. Since there is another form of the " token, set its bias to -100 or ban token [1346] outright. As for token [525] (the ' token), I set it to -5 to reduce 'thinking' done by the character while not banning it completely, since some words need it.
+I set my Temperature to 1.2 and min p to 0.1. Feel free to play around with other samplers as you see fit.
+
+Special thanks to PygmalionAI for Pygmalion, Jaxxks for Estopia, and Seeker for Erebus, along with the creators of the other models used in this merge.
+
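The logit-bias and sampler advice above can be illustrated with a small sketch. This is plain numpy, not KoboldCpp's or SillyTavern's actual sampler code; the token ids and settings are taken from the comments above:

```python
import numpy as np

def biased_min_p_sample(logits, temperature=1.2, min_p=0.1, bias=None, seed=0):
    """Apply per-token logit bias, temperature scaling, then min-p filtering."""
    # Bias values from the comments above: soften " (376), effectively ban
    # its variant (1346), and discourage ' (525) without banning it.
    bias = bias or {376: -3.0, 1346: -100.0, 525: -5.0}
    logits = np.asarray(logits, dtype=np.float64).copy()
    for token_id, value in bias.items():
        logits[token_id] += value
    logits /= temperature
    # Softmax to probabilities.
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    # min-p: drop tokens less than min_p times as likely as the top token.
    probs[probs < min_p * probs.max()] = 0.0
    probs /= probs.sum()
    return np.random.default_rng(seed).choice(len(probs), p=probs)
```

Even if the quote token starts out as the most likely continuation, the -100 bias pushes it out of contention, and min-p then trims the long tail that a 1.2 temperature would otherwise inflate.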
 ### Merge Method

 This model was merged using the passthrough merge method.
 
 The following YAML configuration was used to produce this model:

 ```yaml
+merge_method: task_arithmetic
+base_model: TheBloke/Llama-2-13B-fp16
+models:
+  - model: TheBloke/Llama-2-13B-fp16
+  - model: Undi95/UtopiaXL-13B
+    parameters:
+      weight: 1.0
+  - model: Doctor-Shotgun/cat-v1.0-13b
+    parameters:
+      weight: 0.02
+  - model: PygmalionAI/mythalion-13b
+    parameters:
+      weight: 0.10
+  - model: Undi95/Emerhyst-13B
+    parameters:
+      weight: 0.05
+  - model: CalderaAI/13B-Thorns-l2
+    parameters:
+      weight: 0.05
+  - model: KoboldAI/LLaMA2-13B-Tiefighter
+    parameters:
+      weight: 0.20
 dtype: float16
+name: EstopiaV9 # 1rstSamurai
+---
+merge_method: task_arithmetic
+base_model: TheBloke/Llama-2-13B-fp16
+models:
+  - model: TheBloke/Llama-2-13B-fp16
+  - model: Undi95/UtopiaXL-13B
+    parameters:
+      weight: 1.0
+  - model: Doctor-Shotgun/cat-v1.0-13b
+    parameters:
+      weight: 0.01
+  - model: chargoddard/rpguild-chatml-13b
+    parameters:
+      weight: 0.02
+  - model: PygmalionAI/mythalion-13b
+    parameters:
+      weight: 0.08
+  - model: CalderaAI/13B-Thorns-l2
+    parameters:
+      weight: 0.02
+  - model: KoboldAI/LLaMA2-13B-Tiefighter
+    parameters:
+      weight: 0.20
+dtype: float16
+name: EstopiaV13 # No13
+---
+models:
+  - model: output/EstopiaV9
+    parameters:
+      weight: 1
+      density: 1
+  - model: output/EstopiaV13
+    parameters:
+      weight: 0.05
+      density: 0.30
+merge_method: dare_ties
+base_model: TheBloke/Llama-2-13B-fp16
+parameters:
+  int8_mask: true
+dtype: bfloat16
+name: Estopia_Dare # rainbowrainbow
+---
+models:
+  - model: output/Estopia_Dare
+    parameters:
+      weight: 1
+      density: 1
+  - model: /home/mixer/koboldai/models/llama2-13b-erebus-v3
+    parameters:
+      weight: 0.2
+      density: 0.1
+merge_method: dare_ties
+base_model: TheBloke/Llama-2-13B-fp16
+parameters:
+  int8_mask: true
+dtype: bfloat16
+name: Estopia_Eru # A-JAX
+---
+# Everything from the top down to Estopia_Eru is by Jaxxks; I just stack
+# Estopia_Eru with Pyg2 to give it Pyg's dialogue capability.
 slices:
+  - sources:
+      - model: output/Estopia_Eru
+        layer_range: [0, 16]
+  - sources:
+      - model: PygmalionAI/pygmalion-2-13b
+        layer_range: [8, 24]
+  - sources:
+      - model: output/Estopia_Eru
+        layer_range: [17, 32]
+  - sources:
+      - model: PygmalionAI/pygmalion-2-13b
+        layer_range: [25, 40]
+merge_method: passthrough
+dtype: float16
+
 ```
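For readers unfamiliar with the merge methods named in the config, here is a minimal numpy sketch of the arithmetic involved. This is a simplification: real `dare_ties` also performs TIES-style sign election across models, and mergekit applies all of this per-tensor across full checkpoints.

```python
import numpy as np

def task_arithmetic(base, weighted_models):
    """task_arithmetic: merged = base + sum_i w_i * (model_i - base)."""
    merged = base.astype(np.float64).copy()
    for model, weight in weighted_models:
        merged += weight * (model - base)
    return merged

def dare_delta(base, model, weight, density, rng):
    """DARE: drop a random (1 - density) fraction of the task vector and
    rescale the survivors by 1/density, keeping the expected delta unchanged."""
    delta = model - base
    mask = rng.random(delta.shape) < density
    return weight * np.where(mask, delta / density, 0.0)
```

This is why the EstopiaV13 and Erebus steps use low `weight` and `density`: only a small, sparse fraction of each task vector is folded back into the base.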