Here is a 4-bit GPTQ-quantized version

#5
by chplushsieh - opened

https://huggingface.co/chplushsieh/Meta-Llama-3-8B-Instruct-abliterated-v3-GPTQ-4bit
for people who want to use it with GPTQ on an 8 GB VRAM GPU.
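A minimal loading sketch for that repo, assuming `transformers` with a GPTQ backend (e.g. `auto-gptq` or `gptqmodel`) is installed; the generation settings here are illustrative, not part of the quantized upload itself:

```python
# Hypothetical usage sketch -- assumes transformers + a GPTQ backend
# (auto-gptq or gptqmodel) and a CUDA GPU with ~8 GB VRAM.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "chplushsieh/Meta-Llama-3-8B-Instruct-abliterated-v3-GPTQ-4bit"

tokenizer = AutoTokenizer.from_pretrained(model_id)
# device_map="auto" places the quantized weights on the available GPU;
# the GPTQ config is read from the repo, so no quantization args are needed here.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```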
