Update README.md
README.md CHANGED
@@ -22,6 +22,8 @@ quantize.exe --allow-requantize --output-tensor-type f16 --token-embedding-type
 quantize.exe --allow-requantize --pure model.f16.gguf model.f16.q8_p.gguf q8_0
 and there is also a pure f16 and a pure q8 in every directory.
 
+* [ZeroWw/llama3-turbcat-instruct-8b-GGUF](https://huggingface.co/ZeroWw/llama3-turbcat-instruct-8b-GGUF)
+* [ZeroWw/L3-SthenoMaid-8B-V1-GGUF](https://huggingface.co/ZeroWw/L3-SthenoMaid-8B-V1-GGUF)
 * [L3-8B-Celeste-v1-GGUF](https://huggingface.co/ZeroWw/L3-8B-Celeste-v1-GGUF)
 * [ZeroWw/Gemmasutra-9B-v1b-GGUF](https://huggingface.co/ZeroWw/Gemmasutra-9B-v1b-GGUF)
 * [ZeroWw/ghost-7b-alpha-GGUF](https://huggingface.co/ZeroWw/ghost-7b-alpha-GGUF)
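For context, a minimal sketch of the two quantization recipes referenced in the diff's context lines, using llama.cpp's quantize tool. The f16 value for --token-embedding-type and the q5_k target are assumptions filled in for illustration (the hunk header above is truncated), and the file names are placeholders:

```sh
# Mixed quantization: keep the output and token-embedding tensors at f16 and
# quantize the remaining tensors (q5_k here is an illustrative target, not taken from the diff).
quantize.exe --allow-requantize --output-tensor-type f16 --token-embedding-type f16 \
    model.f16.gguf model.f16.q5_k.gguf q5_k

# Pure quantization: every tensor goes to q8_0, matching the context line in the hunk.
quantize.exe --allow-requantize --pure model.f16.gguf model.f16.q8_p.gguf q8_0
```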