---
title: README
emoji: 🔥
colorFrom: purple
colorTo: purple
sdk: static
pinned: true
---
These are my own quantizations (updated almost daily).
The difference from standard quantizations is that I quantize the output and embedding tensors to f16, and all other tensors to q5_k, q6_k, or q8_0.
This produces models that show little or no degradation while remaining smaller in size.
They run at about 3-6 tokens/sec on CPU only with llama.cpp, and obviously faster on machines with powerful GPUs.
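For reference, a minimal sketch of running one of these GGUF files on CPU with llama.cpp. It assumes a recent build where the CLI binary is named `llama-cli` (older builds use `main`), and the model filename is just a placeholder for whichever quant you downloaded:

```sh
# Minimal sketch: run a q6_k quant on CPU with llama.cpp.
# Binary name and .gguf filename are assumptions; adjust to your build/download.
./llama-cli -m model.f16.q6_k.gguf \
  -p "Write a haiku about quantization." \
  -n 128 \
  --threads 8   # CPU threads; tune to your machine
```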
ALL the models were quantized in this way:
quantize.exe --allow-requantize --output-tensor-type f16 --token-embedding-type f16 model.f16.gguf model.f16.q5.gguf q5_k
quantize.exe --allow-requantize --output-tensor-type f16 --token-embedding-type f16 model.f16.gguf model.f16.q6.gguf q6_k
quantize.exe --allow-requantize --output-tensor-type f16 --token-embedding-type f16 model.f16.gguf model.f16.q8.gguf q8_0
A pure f16 version is also included in every repository.
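If you want to reproduce the process end to end, the sketch below shows the usual llama.cpp workflow: convert the original Hugging Face model to an f16 GGUF, then apply the three quantizations above. The paths are placeholders, and the tool names (`convert_hf_to_gguf.py` and `llama-quantize` in recent llama.cpp builds, `quantize`/`quantize.exe` in older ones) may differ depending on your version:

```sh
# Sketch of the full pipeline, assuming a local llama.cpp checkout.
# MODEL_DIR is a placeholder for the downloaded Hugging Face model directory.
MODEL_DIR=path/to/hf-model

# 1) Convert the HF model to a full-precision f16 GGUF.
python convert_hf_to_gguf.py "$MODEL_DIR" --outtype f16 --outfile model.f16.gguf

# 2) Quantize, keeping the output and token-embedding tensors at f16.
for Q in q5_k q6_k q8_0; do
  ./llama-quantize --allow-requantize \
    --output-tensor-type f16 --token-embedding-type f16 \
    model.f16.gguf "model.f16.${Q}.gguf" "$Q"
done
```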
- ZeroWw/Llama-3-8B-Instruct-Gradient-4194k-GGUF
- ZeroWw/gemma-2-9b-it-GGUF
- ZeroWw/llama3-8B-DarkIdol-2.1-Uncensored-32K-GGUF
- ZeroWw/Meta-Llama-3-8B-Instruct-abliterated-v3-GGUF
- ZeroWw/Hathor_Stable-v0.2-L3-8B-GGUF
- ZeroWw/L3-Aethora-15B-V2-GGUF
- ZeroWw/L3-8B-Stheno-v3.3-32K-GGUF
- ZeroWw/Llama-3-8B-Instruct-Gradient-1048k-GGUF
- ZeroWw/Pythia-Chat-Base-7B-GGUF
- ZeroWw/Yi-1.5-6B-Chat-GGUF
- ZeroWw/DeepSeek-Coder-V2-Lite-Base-GGUF
- ZeroWw/Yi-1.5-9B-32K-GGUF
- ZeroWw/aya-23-8B-GGUF
- ZeroWw/MixTAO-7Bx2-MoE-v8.1-GGUF
- ZeroWw/Phi-3-medium-128k-instruct-GGUF
- ZeroWw/Phi-3-mini-128k-instruct-GGUF
- ZeroWw/Qwen1.5-7B-Chat-GGUF
- ZeroWw/NeuralDaredevil-8B-abliterated-GGUF
- ZeroWw/Mistroll-7B-v2.2-GGUF
- ZeroWw/Samantha-Qwen-2-7B-GGUF
- ZeroWw/Meta-Llama-3-8B-Instruct-GGUF
- ZeroWw/NSFW_DPO_Noromaid-7b-Mistral-7B-Instruct-v0.1-GGUF
- ZeroWw/microsoft_WizardLM-2-7B-GGUF
- ZeroWw/Mistral-7B-Instruct-v0.3-GGUF