Update README.md
README.md CHANGED
@@ -25,6 +25,7 @@ quantize.exe --allow-requantize --output-tensor-type f16 --token-embedding-type
 quantize.exe --allow-requantize --pure model.f16.gguf model.f16.q8_p.gguf q8_0
 ```
 
+* [ZeroWw/Symbol-LLM-8B-Instruct-GGUF](https://huggingface.co/ZeroWw/Symbol-LLM-8B-Instruct-GGUF)
 * [ZeroWw/Meta-Llama-3.1-8B-Instruct-GGUF](https://huggingface.co/ZeroWw/Meta-Llama-3.1-8B-Instruct-GGUF)
 * [ZeroWw/ghost-8b-beta-GGUF](https://huggingface.co/ZeroWw/ghost-8b-beta-GGUF)
 * [ZeroWw/Mistral-Nemo-Base-2407-GGUF](https://huggingface.co/ZeroWw/Mistral-Nemo-Base-2407-GGUF)
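
For context, the command in the hunk above is llama.cpp's quantization tool (`quantize.exe` on Windows; the binary is named `llama-quantize` in current llama.cpp builds). The sketch below is a non-authoritative reconstruction of the two recipes the README refers to, assuming a `model.f16.gguf` produced with llama.cpp's `convert_hf_to_gguf.py`; the tensor types in the first command are illustrative, since the hunk header truncates that line.

```
# Sketch only; assumes llama-quantize built from llama.cpp.

# Mixed recipe (from the truncated hunk header): keep output and token-embedding
# tensors at f16, requantize the rest (q6_k chosen here purely for illustration).
./llama-quantize --allow-requantize --output-tensor-type f16 --token-embedding-type f16 \
  model.f16.gguf model.f16.q6_k.gguf q6_k

# "Pure" recipe (the unchanged context line above): quantize every tensor to q8_0,
# disabling mixed per-tensor types.
./llama-quantize --allow-requantize --pure model.f16.gguf model.f16.q8_p.gguf q8_0
```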