ZeroWw committed on
Commit a76b1ec
Parent: f2a9702

Update README.md

Files changed (1):
  1. README.md +2 -0
README.md CHANGED
@@ -22,6 +22,8 @@ quantize.exe --allow-requantize --output-tensor-type f16 --token-embedding-type
 quantize.exe --allow-requantize --pure model.f16.gguf model.f16.q8_p.gguf q8_0
 and there is also a pure f16 and a pure q8 in every directory.
 
+* [ZeroWw/xLAM-7b-fc-r-GGUF](https://huggingface.co/ZeroWw/xLAM-7b-fc-r-GGUF)
+* [ZeroWw/xLAM-1b-fc-r-GGUF](https://huggingface.co/ZeroWw/xLAM-1b-fc-r-GGUF)
 * [ZeroWw/Mistral-7B-Instruct-v0.3-GGUF](https://huggingface.co/ZeroWw/Mistral-7B-Instruct-v0.3-GGUF)
 * [ZeroWw/L3-8b-Rosier-v1-GGUF](https://huggingface.co/ZeroWw/L3-8b-Rosier-v1-GGUF)
 * [ZeroWw/llama3-turbcat-instruct-8b-GGUF](https://huggingface.co/ZeroWw/llama3-turbcat-instruct-8b-GGUF)