Update README.md

README.md CHANGED
@@ -18,8 +18,10 @@ ALL the models were quantized in this way:
 quantize.exe --allow-requantize --output-tensor-type f16 --token-embedding-type f16 model.f16.gguf model.f16.q5.gguf q5_k
 quantize.exe --allow-requantize --output-tensor-type f16 --token-embedding-type f16 model.f16.gguf model.f16.q6.gguf q6_k
 quantize.exe --allow-requantize --output-tensor-type f16 --token-embedding-type f16 model.f16.gguf model.f16.q8.gguf q8_0
-
+quantize.exe --allow-requantize --pure model.f16.gguf model.f16.q8_p.gguf q8_0
+and there is also a pure f16 and a pure q8 in every directory.
 
+* [ZeroWw/Phi-3-mini-128k-instruct-abliterated-v3-GGUF](https://huggingface.co/ZeroWw/Phi-3-mini-128k-instruct-abliterated-v3-GGUF)
 * [ZeroWw/Phi-3-song-lyrics-1.0-GGUF](https://huggingface.co/ZeroWw/Phi-3-song-lyrics-1.0-GGUF)
 * [ZeroWw/Meta-Llama-3-8B-Instruct-GGUF](https://huggingface.co/ZeroWw/Meta-Llama-3-8B-Instruct-GGUF)
 * [ZeroWw/LLaMAX3-8B-Alpaca-GGUF](https://huggingface.co/ZeroWw/LLaMAX3-8B-Alpaca-GGUF)
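The recipe above can be sketched as a small script: the three mixed quantizations keep the output tensor and token embeddings at f16 while the remaining weights are quantized, and the `--pure` variant quantizes every tensor to q8_0. This is only a sketch of the commands shown in the diff; the `gen_quant_cmds` helper name is ours, and it echoes the invocations rather than running the llama.cpp `quantize` binary.

```shell
#!/bin/sh
# Sketch: emit the quantize invocations used for these GGUF repos.
# gen_quant_cmds is a hypothetical helper, not part of the repo.
gen_quant_cmds() {
  src="$1"                 # e.g. model.f16.gguf
  base="${src%.gguf}"      # strip extension -> model.f16
  # Mixed quants: output tensor and token embeddings stay f16.
  for q in q5_k q6_k q8_0; do
    short="${q%_*}"        # q5_k -> q5, q6_k -> q6, q8_0 -> q8
    echo "quantize.exe --allow-requantize --output-tensor-type f16 --token-embedding-type f16 $src $base.$short.gguf $q"
  done
  # Pure q8_0: no f16 exceptions, every tensor is quantized.
  echo "quantize.exe --allow-requantize --pure $src $base.q8_p.gguf q8_0"
}

gen_quant_cmds model.f16.gguf
```

On Linux or macOS the binary is plain `quantize` (or `llama-quantize` in newer llama.cpp builds) rather than `quantize.exe`; the flags are the same.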