ZeroWw committed on
Commit
ede6619
1 Parent(s): 8bfedc0

Update README.md

Files changed (1)
  1. README.md +10 -9
README.md CHANGED
@@ -1,11 +1,11 @@
- ---
- title: README
- emoji: 🔥
- colorFrom: purple
- colorTo: purple
- sdk: static
- pinned: true
- ---
+ ---
+ title: README
+ emoji: 🔥
+ colorFrom: purple
+ colorTo: purple
+ sdk: static
+ pinned: true
+ ---

  These are my own quantizations (updated almost daily).
  The difference with normal quantizations is that I quantize the output and embed tensors to f16.
@@ -14,7 +14,8 @@ This creates models that are little or not degraded at all and have a smaller si
  They run at about 3-6 t/sec on CPU only using llama.cpp
  And obviously faster on computers with potent GPUs

-
+ * [ZeroWw/Llama-3-8B-Instruct-Gradient-1048k-GGUF](https://huggingface.co/ZeroWw/Llama-3-8B-Instruct-Gradient-1048k-GGUF)
+ * [ZeroWw/Pythia-Chat-Base-7B-GGUF](https://huggingface.co/ZeroWw/Pythia-Chat-Base-7B-GGUF)
  * [ZeroWw/Yi-1.5-6B-Chat-GGUF](https://huggingface.co/ZeroWw/Yi-1.5-6B-Chat-GGUF)
  * [ZeroWw/DeepSeek-Coder-V2-Lite-Base-GGUF](https://huggingface.co/ZeroWw/DeepSeek-Coder-V2-Lite-Base-GGUF)
  * [ZeroWw/Yi-1.5-9B-32K-GGUF](https://huggingface.co/ZeroWw/Yi-1.5-9B-32K-GGUF)
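
For context on the technique the README describes: llama.cpp's `llama-quantize` tool supports `--output-tensor-type` and `--token-embedding-type` overrides, which is one way to keep the output and embedding tensors at f16 while quantizing the rest of the model. The sketch below is illustrative, not taken from this commit; the file names and the `q5_k` quant type are assumptions.

```sh
# Minimal sketch, assuming a llama.cpp checkout with llama-quantize built.
# Keeps the output and token-embedding tensors at f16 while quantizing
# all other tensors to q5_k. File names and quant type are illustrative.
./llama-quantize \
    --output-tensor-type f16 \
    --token-embedding-type f16 \
    model.f16.gguf model.q5_k.f16.gguf q5_k
```

The resulting GGUF file can then be run CPU-only with llama.cpp, e.g. `./llama-cli -m model.q5_k.f16.gguf -p "Hello"`, which is the setup behind the 3-6 t/sec figure quoted above.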