munish0838 committed
Commit 7e54185
1 Parent(s): 842a2c3

Upload README.md with huggingface_hub

Files changed (1)
  1. README.md +71 -0
README.md ADDED
---
base_model:
- Qwen/Qwen2.5-Coder-7B
- huihui-ai/Qwen2.5-Coder-7B-Instruct-abliterated
- Etherll/Qwen2.5-Coder-7B-Instruct-Ties
- MadeAgents/Hammer2.0-7b
library_name: transformers
tags:
- mergekit
- merge
---

[![QuantFactory Banner](https://lh7-rt.googleusercontent.com/docsz/AD_4nXeiuCm7c8lEwEJuRey9kiVZsRn2W-b4pWlu3-X534V3YmVuVc2ZL-NXg2RkzSOOS2JXGHutDuyyNAUtdJI65jGTo8jT9Y99tMi4H4MqL44Uc5QKG77B0d6-JfIkZHFaUA71-RtjyYZWVIhqsNZcx8-OMaA?key=xt3VSDoCbmTY7o-cwwOFwQ)](https://hf.co/QuantFactory)

# QuantFactory/Qwen2.5-Coder-7B-3x-Instruct-TIES-v1.0-GGUF

This is a quantized version of [BenevolenceMessiah/Qwen2.5-Coder-7B-3x-Instruct-TIES-v1.0](https://huggingface.co/BenevolenceMessiah/Qwen2.5-Coder-7B-3x-Instruct-TIES-v1.0) created using llama.cpp.

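One way to run the GGUF quants locally is through llama-cpp-python (llama.cpp's Python bindings). The sketch below is only illustrative: the quant filename is an assumption, so substitute whichever file you actually download from this repo.

```python
# Minimal sketch: chat with one of the GGUF quants via llama-cpp-python.
# The filename below is assumed -- pick whichever quant you downloaded.
from llama_cpp import Llama

llm = Llama(
    model_path="Qwen2.5-Coder-7B-3x-Instruct-TIES-v1.0.Q4_K_M.gguf",  # assumed local file
    n_ctx=4096,       # context window
    n_gpu_layers=-1,  # offload all layers if llama.cpp was built with GPU support
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Write a Python function that merges two sorted lists."}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```
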
# Original Model Card

# merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

### Merge Method

This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method, with [Qwen/Qwen2.5-Coder-7B](https://huggingface.co/Qwen/Qwen2.5-Coder-7B) as the base model.

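To make the method concrete, here is a small illustrative sketch of the TIES idea on a single weight tensor: build task vectors against the base, optionally trim them by magnitude, elect a per-parameter sign, then average only the deltas that agree with it. The `ties_merge` helper is a conceptual toy written for this card, not mergekit's implementation; with `density: 1.0`, as in the configuration below, the trimming step keeps every parameter.

```python
# Toy TIES merge for one tensor: trim, elect sign, disjoint mean.
# Illustrative only -- not mergekit's actual code.
import numpy as np

def ties_merge(base, finetuned, density=1.0, weights=None):
    deltas = [ft - base for ft in finetuned]          # task vectors
    if density < 1.0:                                 # trim: keep top fraction by magnitude
        trimmed = []
        for d in deltas:
            k = max(int(round(d.size * density)), 1)
            thresh = np.sort(np.abs(d), axis=None)[-k]
            trimmed.append(np.where(np.abs(d) >= thresh, d, 0.0))
        deltas = trimmed
    if weights is None:
        weights = [1.0] * len(deltas)
    stacked = np.stack([w * d for w, d in zip(weights, deltas)])
    elected = np.sign(stacked.sum(axis=0))            # per-parameter sign election
    agree = np.sign(stacked) == elected               # keep deltas matching the elected sign
    merged = np.where(agree, stacked, 0.0).sum(axis=0)
    counts = np.maximum(agree.sum(axis=0), 1)         # disjoint mean over agreeing models
    return base + merged / counts

# Toy usage: three "fine-tuned" variants of a 2x2 base weight
rng = np.random.default_rng(0)
base = np.zeros((2, 2))
finetuned = [base + 0.1 * rng.standard_normal((2, 2)) for _ in range(3)]
print(ties_merge(base, finetuned, density=1.0))
```
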
### Models Merged

The following models were included in the merge:

* [huihui-ai/Qwen2.5-Coder-7B-Instruct-abliterated](https://huggingface.co/huihui-ai/Qwen2.5-Coder-7B-Instruct-abliterated)
* [Etherll/Qwen2.5-Coder-7B-Instruct-Ties](https://huggingface.co/Etherll/Qwen2.5-Coder-7B-Instruct-Ties)
* [MadeAgents/Hammer2.0-7b](https://huggingface.co/MadeAgents/Hammer2.0-7b)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: huihui-ai/Qwen2.5-Coder-7B-Instruct-abliterated
    architecture: qwen2
    parameters:
      density: 1.0
      weight: 1.0
  - model: MadeAgents/Hammer2.0-7b
    architecture: qwen2
    parameters:
      density: 1.0
      weight: 1.0
  - model: Etherll/Qwen2.5-Coder-7B-Instruct-Ties
    architecture: qwen2
    parameters:
      density: 1.0
      weight: 1.0

merge_method: ties
base_model: Qwen/Qwen2.5-Coder-7B
parameters:
  normalize: true
  int8_mask: false
dtype: bfloat16
tokenizer_source: union
```
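
To reproduce the merge, the configuration above can be saved to a file and passed to mergekit's `mergekit-yaml` entry point. A minimal sketch follows; the config filename and output directory are placeholders, and the source models are pulled from the Hub on first run.

```python
# Hypothetical reproduction sketch: run mergekit on the YAML above.
# Assumes mergekit is installed (pip install mergekit) and the config
# has been saved locally as merge-config.yaml (placeholder name).
import subprocess

subprocess.run(
    [
        "mergekit-yaml",      # mergekit's CLI entry point
        "merge-config.yaml",  # the configuration shown above
        "./merged-model",     # output directory for the merged weights
    ],
    check=True,
)
```
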
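Since the card lists `library_name: transformers`, the full-precision merged checkpoint should load like any other Qwen2.5 model. A minimal sketch, assuming the published repository ships an instruct-style chat template:

```python
# Minimal sketch: load the original (non-GGUF) merged model with transformers.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "BenevolenceMessiah/Qwen2.5-Coder-7B-3x-Instruct-TIES-v1.0"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(
    repo,
    torch_dtype=torch.bfloat16,  # matches the merge's bfloat16 dtype
    device_map="auto",
)

messages = [{"role": "user", "content": "Write a function that checks whether a string is a palindrome."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```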