ZeroWw committed on
Commit
8bfedc0
1 Parent(s): 1791e0e

Update README.md

Files changed (1)
  1. README.md +31 -9
README.md CHANGED
@@ -1,10 +1,32 @@
- ---
- title: README
- emoji: 🔥
- colorFrom: purple
- colorTo: purple
- sdk: static
- pinned: false
- ---

- Edit this `README.md` markdown file to author your organization card.
+ ---
+ title: README
+ emoji: 🔥
+ colorFrom: purple
+ colorTo: purple
+ sdk: static
+ pinned: true
+ ---

+ These are my own quantizations (updated almost daily).
+ The difference from standard quantizations is that I quantize the output and embedding tensors to f16,
+ and the other tensors to q5_k, q6_k, or q8_0 (see the sketch after the diff).
+ This produces models with little to no degradation and a smaller file size.
+ They run at about 3-6 t/s on CPU only using llama.cpp,
+ and obviously faster on machines with capable GPUs.
+
+
+ * [ZeroWw/Yi-1.5-6B-Chat-GGUF](https://huggingface.co/ZeroWw/Yi-1.5-6B-Chat-GGUF)
+ * [ZeroWw/DeepSeek-Coder-V2-Lite-Base-GGUF](https://huggingface.co/ZeroWw/DeepSeek-Coder-V2-Lite-Base-GGUF)
+ * [ZeroWw/Yi-1.5-9B-32K-GGUF](https://huggingface.co/ZeroWw/Yi-1.5-9B-32K-GGUF)
+ * [ZeroWw/aya-23-8B-GGUF](https://huggingface.co/ZeroWw/aya-23-8B-GGUF)
+ * [ZeroWw/MixTAO-7Bx2-MoE-v8.1-GGUF](https://huggingface.co/ZeroWw/MixTAO-7Bx2-MoE-v8.1-GGUF)
+ * [ZeroWw/Phi-3-medium-128k-instruct-GGUF](https://huggingface.co/ZeroWw/Phi-3-medium-128k-instruct-GGUF)
+ * [ZeroWw/Phi-3-mini-128k-instruct-GGUF](https://huggingface.co/ZeroWw/Phi-3-mini-128k-instruct-GGUF)
+ * [ZeroWw/Qwen1.5-7B-Chat-GGUF](https://huggingface.co/ZeroWw/Qwen1.5-7B-Chat-GGUF)
+ * [ZeroWw/NeuralDaredevil-8B-abliterated-GGUF](https://huggingface.co/ZeroWw/NeuralDaredevil-8B-abliterated-GGUF)
+ * [ZeroWw/Mistroll-7B-v2.2-GGUF](https://huggingface.co/ZeroWw/Mistroll-7B-v2.2-GGUF)
+ * [ZeroWw/Samantha-Qwen-2-7B-GGUF](https://huggingface.co/ZeroWw/Samantha-Qwen-2-7B-GGUF)
+ * [ZeroWw/Meta-Llama-3-8B-Instruct-GGUF](https://huggingface.co/ZeroWw/Meta-Llama-3-8B-Instruct-GGUF)
+ * [ZeroWw/NSFW_DPO_Noromaid-7b-Mistral-7B-Instruct-v0.1-GGUF](https://huggingface.co/ZeroWw/NSFW_DPO_Noromaid-7b-Mistral-7B-Instruct-v0.1-GGUF)
+ * [ZeroWw/microsoft_WizardLM-2-7B-GGUF](https://huggingface.co/ZeroWw/microsoft_WizardLM-2-7B-GGUF)
+ * [ZeroWw/Mistral-7B-Instruct-v0.3-GGUF](https://huggingface.co/ZeroWw/Mistral-7B-Instruct-v0.3-GGUF)
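
The recipe described above (output and token-embedding tensors kept at f16, everything else at q5_k, q6_k, or q8_0) maps onto the per-tensor overrides that llama.cpp's `llama-quantize` tool exposes via `--output-tensor-type` and `--token-embedding-type`. The sketch below is a minimal illustration of how such files could be produced, assuming an f16 GGUF conversion of the model already exists; the filename, the loop, and the binary being on PATH are assumptions for the example, not details taken from this commit.

```python
import subprocess
from pathlib import Path

# Hypothetical starting point: an f16 GGUF produced by llama.cpp's convert script.
source_f16 = Path("Mistral-7B-Instruct-v0.3.f16.gguf")

# One output file per base quantization; output and embedding tensors stay at f16.
for qtype in ("q5_k", "q6_k", "q8_0"):
    target = source_f16.name.replace(".f16.gguf", f".{qtype}.gguf")
    cmd = [
        "llama-quantize",                 # llama.cpp quantization binary (assumed on PATH)
        "--output-tensor-type", "f16",    # keep the output tensor at f16
        "--token-embedding-type", "f16",  # keep the token-embedding tensor at f16
        str(source_f16),
        target,
        qtype,                            # quantization type for all remaining tensors
    ]
    print("running:", " ".join(cmd))
    subprocess.run(cmd, check=True)
```

Depending on the llama.cpp version, the binary may be named `quantize` instead of `llama-quantize`, and the exact set of accepted quantization-type names can differ, so check `llama-quantize --help` for the build you are using.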