ZeroWw committed
Commit 172a314
Parent: d59156d

Update README.md

Files changed (1): README.md (+5, -0)
README.md CHANGED
@@ -27,6 +27,11 @@ quantize.exe --allow-requantize --output-tensor-type f16 --token-embedding-type
  quantize.exe --allow-requantize --pure model.f16.gguf model.f16.q8_p.gguf q8_0
  ```
 
+ * [ZeroWw/Phi-3.5-mini-instruct_Uncensored-GGUF](https://huggingface.co/ZeroWw/Phi-3.5-mini-instruct_Uncensored-GGUF)
+ * [ZeroWw/Phi-3.5-mini-instruct-GGUF](https://huggingface.co/ZeroWw/Phi-3.5-mini-instruct-GGUF)
+ * [ZeroWw/ghost-8b-beta-1608-GGUF](https://huggingface.co/ZeroWw/ghost-8b-beta-1608-GGUF)
+ * [ZeroWw/Llama-3.1-Storm-8B-GGUF](https://huggingface.co/ZeroWw/Llama-3.1-Storm-8B-GGUF)
+ * [ZeroWw/Llama-3.1-Minitron-4B-Width-Base-GGUF](https://huggingface.co/ZeroWw/Llama-3.1-Minitron-4B-Width-Base-GGUF)
  * [ZeroWw/Llama3.1-8B-Enigma-GGUF](https://huggingface.co/ZeroWw/Llama3.1-8B-Enigma-GGUF)
  * [ZeroWw/Llama3.1-8B-ShiningValiant2-GGUF](https://huggingface.co/ZeroWw/Llama3.1-8B-ShiningValiant2-GGUF)
  * [ZeroWw/neural-chat-7b-v3-3-GGUF](https://huggingface.co/ZeroWw/neural-chat-7b-v3-3-GGUF)
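
Each repo added above holds GGUF files produced with the quantize recipe shown in the diff context (f16 output/embedding tensors, or a pure q8_0 requantize). Below is a minimal sketch of fetching one of these quants and running it with llama.cpp; the exact .gguf file name inside the repo is an assumption, so check the repo's file listing first and substitute the real name.

```
# Sketch (assumptions: the file name Llama-3.1-Storm-8B.q8_p.gguf and a local
# llama.cpp build providing llama-cli; replace the file name with one actually
# listed in the repo).
huggingface-cli download ZeroWw/Llama-3.1-Storm-8B-GGUF Llama-3.1-Storm-8B.q8_p.gguf --local-dir ./models

# Run a short generation against the downloaded quant.
./llama-cli -m ./models/Llama-3.1-Storm-8B.q8_p.gguf -p "Hello" -n 128
```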