Lewdiculous committed on
Commit 0914976
1 Parent(s): 2d63d87

Update README.md

Files changed (1): README.md +90 -0
README.md CHANGED

---
library_name: transformers
license: cc-by-4.0
language:
- en
tags:
- gguf
- quantized
- roleplay
- imatrix
- mistral
- merge
inference: false
# base_model:
# - Epiculous/Fett-uccine-Long-Noodle-7B-120k-Context
# - Epiculous/Mika-7B
---

This repository hosts GGUF-IQ-Imatrix quantizations for **[grimjim/kukulemon-7B](https://huggingface.co/grimjim/kukulemon-7B)**.

**What does "Imatrix" mean?**

It stands for **Importance Matrix**, a technique used to improve the quality of quantized models.
The **Imatrix** is calculated based on calibration data, and it helps determine the importance of different model activations during the quantization process.
The idea is to preserve the most important information during quantization, which can help reduce the loss of model performance, especially when the calibration data is diverse.
[[1]](https://github.com/ggerganov/llama.cpp/discussions/5006) [[2]](https://github.com/ggerganov/llama.cpp/discussions/5263#discussioncomment-8395384)
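
As a rough, self-contained illustration of that idea, the toy sketch below treats the importance matrix as per-input-channel weights on the quantization error and picks the quantization scale that minimizes the weighted error. This is only a conceptual sketch with made-up names and sizes, not llama.cpp's actual imatrix implementation.

```python
# Toy illustration only: importance-weighted quantization, not llama.cpp's imatrix code.
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(size=(4, 8))          # a pretend block of weight rows
activations = rng.normal(size=(1024, 8))   # calibration activations feeding those weights

# "Importance" per input channel: mean squared activation seen during calibration.
importance = (activations ** 2).mean(axis=0)

def quantize(w, scale):
    """Simple symmetric round-to-nearest quantization at a given scale."""
    return np.round(w / scale) * scale

def weighted_error(w, wq, imp):
    """Quantization error with each input channel weighted by its importance."""
    return ((w - wq) ** 2 * imp).sum()

# Choose the scale that minimizes the importance-weighted error,
# rather than the plain unweighted error.
candidates = np.linspace(0.01, 0.2, 50)
best_scale = min(candidates, key=lambda s: weighted_error(weights, quantize(weights, s), importance))
print("chosen scale:", best_scale)
```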

For imatrix data generation, kalomaze's `groups_merged.txt` with added roleplay chats was used; you can find it [here](https://huggingface.co/Lewdiculous/Datura_7B-GGUF-Imatrix/blob/main/imatrix-with-rp-format-data.txt).

**Steps:**
```
Base ⇢ GGUF(F16) ⇢ Imatrix-Data(F16) ⇢ GGUF(Imatrix-Quants)
```
**Quants:**
```python
quantization_options = [
    "Q4_K_M", "IQ4_XS", "Q5_K_M", "Q5_K_S", "Q6_K",
    "Q8_0", "IQ3_M", "IQ3_S", "IQ3_XXS"
]
```
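
To make the pipeline above concrete, here is a minimal sketch of the three steps using llama.cpp's conversion and quantization tools driven from Python. The script/binary names (`convert-hf-to-gguf.py`, `imatrix`, `quantize`) and their flags differ between llama.cpp versions, and the file paths are placeholders, so treat the exact invocations as assumptions to check against your own checkout.

```python
# Sketch of Base -> GGUF(F16) -> Imatrix-Data(F16) -> GGUF(Imatrix-Quants).
# Tool names, flags, and paths are assumptions; verify them for your llama.cpp version.
import subprocess

MODEL_DIR = "kukulemon-7B"                  # local checkout of the base model
F16_GGUF = "kukulemon-7B-F16.gguf"
IMATRIX = "imatrix.dat"
CALIBRATION = "imatrix-with-rp-format-data.txt"

quantization_options = [
    "Q4_K_M", "IQ4_XS", "Q5_K_M", "Q5_K_S", "Q6_K",
    "Q8_0", "IQ3_M", "IQ3_S", "IQ3_XXS",
]

# 1) Base -> GGUF(F16)
subprocess.run(["python", "convert-hf-to-gguf.py", MODEL_DIR,
                "--outtype", "f16", "--outfile", F16_GGUF], check=True)

# 2) Imatrix-Data(F16): compute the importance matrix from the calibration text
subprocess.run(["./imatrix", "-m", F16_GGUF, "-f", CALIBRATION, "-o", IMATRIX], check=True)

# 3) GGUF(Imatrix-Quants): one quantized file per requested type
for quant in quantization_options:
    subprocess.run(["./quantize", "--imatrix", IMATRIX, F16_GGUF,
                    f"kukulemon-7B-{quant}-imat.gguf", quant], check=True)
```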

If you want anything that's not here, or another model, feel free to request it.

**My waifu image for this card:**

![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/65d4cf2693a0a3744a27536c/EO8El-PYPDhqd8LLhGgT1.jpeg)

**Original model information:**

# kukulemon-7B

A merge of two similar models with strong reasoning, hopefully resulting in a "dense" encoding of said reasoning, was itself merged with a model targeting roleplay.

I've tested with ChatML prompts at temperature=1.1 and minP=0.03. The model itself supports Alpaca-format prompts. The model claims a context length of 32K, but I've only tested up to 8K to date.
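
As a concrete illustration of those settings, the sketch below builds a ChatML-formatted prompt and pairs it with the sampler values mentioned above; the message text is placeholder, and the actual inference call is left to whichever GGUF runner you use.

```python
# ChatML prompt layout plus the sampler settings mentioned above.
# The system/user text is placeholder; pass the prompt to your preferred GGUF runner.
sampler_settings = {"temperature": 1.1, "min_p": 0.03}

def chatml_prompt(system: str, user: str) -> str:
    """Build a ChatML prompt, leaving the assistant turn open for generation."""
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        "<|im_start|>assistant\n"
    )

prompt = chatml_prompt("You are a helpful roleplay partner.", "Introduce yourself.")
print(sampler_settings)
print(prompt)
```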

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details
### Merge Method

This model was merged using the SLERP merge method.
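
For reference, SLERP (spherical linear interpolation) blends two tensors along the arc between them rather than along a straight line. A standard statement of the formula, with $t$ taken per tensor from the schedule in the configuration below, is:

$$
\mathrm{slerp}(W_0, W_1; t) = \frac{\sin\bigl((1-t)\,\theta\bigr)}{\sin\theta}\,W_0 + \frac{\sin(t\,\theta)}{\sin\theta}\,W_1,
\qquad
\theta = \arccos\!\left(\frac{\langle W_0, W_1\rangle}{\lVert W_0\rVert\,\lVert W_1\rVert}\right)
$$

This is only the conceptual formula; mergekit's implementation adds practical details such as falling back to linear interpolation when the tensors are nearly parallel.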

### Models Merged

The following models were included in the merge:
* [grimjim/kuno-kunoichi-v1-DPO-v2-SLERP-7B](https://huggingface.co/grimjim/kuno-kunoichi-v1-DPO-v2-SLERP-7B)
* [KatyTheCutie/LemonadeRP-4.5.3](https://huggingface.co/KatyTheCutie/LemonadeRP-4.5.3)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
slices:
  - sources:
      - model: grimjim/kuno-kunoichi-v1-DPO-v2-SLERP-7B
        layer_range: [0, 32]
      - model: KatyTheCutie/LemonadeRP-4.5.3
        layer_range: [0, 32]
# or, the equivalent models: syntax:
# models:
merge_method: slerp
base_model: KatyTheCutie/LemonadeRP-4.5.3
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5 # fallback for rest of tensors
dtype: float16
```
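
If you want to reproduce the merge from this configuration, mergekit provides a `mergekit-yaml` command-line entry point that takes the config file and an output directory. The sketch below simply invokes it from Python; the config filename and output path are placeholders, and optional flags (GPU use, lazy loading, etc.) depend on your environment.

```python
# Minimal sketch: run the merge defined by the YAML above via mergekit's CLI entry point.
# "kukulemon-7B.yml" and the output directory are placeholder paths.
import subprocess

subprocess.run(
    ["mergekit-yaml", "kukulemon-7B.yml", "./kukulemon-7B-merged"],
    check=True,
)
```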