Update README.md
README.md CHANGED
@@ -20,6 +20,7 @@ quantize.exe --allow-requantize --output-tensor-type f16 --token-embedding-type
 quantize.exe --allow-requantize --output-tensor-type f16 --token-embedding-type f16 model.f16.gguf model.f16.q6.gguf q8_0
 and there is also a pure f16 in every directory.

+* [ZeroWw/Gemma-2-9B-It-SPPO-Iter3-GGUF](https://huggingface.co/ZeroWw/Gemma-2-9B-It-SPPO-Iter3-GGUF)
 * [ZeroWw/Phi-3-mini-4k-geminified-GGUF](https://huggingface.co/ZeroWw/Phi-3-mini-4k-geminified-GGUF)
 * [ZeroWw/CodeQwen1.5-7B-Chat-GGUF](https://huggingface.co/ZeroWw/CodeQwen1.5-7B-Chat-GGUF)
 * [ZeroWw/NeuralPipe-7B-slerp-GGUF](https://huggingface.co/ZeroWw/NeuralPipe-7B-slerp-GGUF)
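The `quantize.exe` line in the README is the llama.cpp quantization step used to produce these GGUF files. As a minimal sketch, the same invocation on Linux/macOS might look like the following; the binary name (`llama-quantize`) and file names are assumptions, while the flags and the final `q8_0` type argument are the ones shown above.

```sh
# Sketch of the quantization command on Linux/macOS (assumed llama.cpp build).
# --output-tensor-type / --token-embedding-type keep those tensors at f16,
# while the remaining tensors are quantized to q8_0, as in the README command.
./llama-quantize --allow-requantize \
  --output-tensor-type f16 \
  --token-embedding-type f16 \
  model.f16.gguf model.f16.q6.gguf q8_0
```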