---
inference: false
license: other
---
# Tim Dettmers' Guanaco 7B GPTQ
These files are GPTQ 4bit model files for [Tim Dettmers' Guanaco 7B](https://huggingface.co/timdettmers/guanaco-7b).
They are the result of merging the LoRA into the base LLaMA 7B model and then quantising to 4bit using [GPTQ-for-LLaMa](https://github.com/qwopqwop200/GPTQ-for-LLaMa).
## Other repositories available
* [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/guanaco-7B-GPTQ)
* [4-bit, 5-bit and 8-bit GGML models for CPU(+GPU) inference](https://huggingface.co/TheBloke/guanaco-7B-GGML)
* [Original unquantised fp16 model in HF format](https://huggingface.co/TheBloke/guanaco-7B-HF)
## How to easily download and use this model in text-generation-webui
Open text-generation-webui as normal.
1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/guanaco-7B-GPTQ`.
3. Click **Download**.
4. Wait until it says it's finished downloading.
5. Click the **Refresh** icon next to **Model** in the top left.
6. In the **Model drop-down**: choose the model you just downloaded, `guanaco-7B-GPTQ`.
7. If you see an error in the bottom right, ignore it - it's temporary.
8. Fill out the `GPTQ parameters` on the right: `Bits = 4`, `Groupsize = 128`, `model_type = Llama`
9. Click **Save settings for this model** in the top right.
10. Click **Reload the Model** in the top right.
11. Once it says it's loaded, click the **Text Generation tab** and enter a prompt!
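## How to use this model from Python code

If you would rather load the model from Python code than through the UI, the sketch below shows one way to do it with [AutoGPTQ](https://github.com/PanQiWei/AutoGPTQ), which this file is noted to work with. It is an illustrative sketch, not a tested recipe: the quantise settings mirror the `Bits = 4`, `Groupsize = 128` parameters above, and the `### Human:` / `### Assistant:` prompt template is assumed from Guanaco's usual format rather than stated in this card.

```python
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig

model_name_or_path = "TheBloke/guanaco-7B-GPTQ"
# Filename (without extension) of the safetensors file in the main branch
model_basename = "Guanaco-7B-GPTQ-4bit-128g.no-act-order"

tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)

# Quantisation settings matching this repo: 4-bit, groupsize 128, no act-order
quantize_config = BaseQuantizeConfig(bits=4, group_size=128, desc_act=False)

model = AutoGPTQForCausalLM.from_quantized(
    model_name_or_path,
    model_basename=model_basename,
    use_safetensors=True,
    device="cuda:0",
    quantize_config=quantize_config,
)

# Prompt template assumed from Guanaco's training format
prompt = "### Human: Tell me about alpacas.\n### Assistant:"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to("cuda:0")
output = model.generate(input_ids=input_ids, do_sample=True, temperature=0.7, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```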
## Provided files
**Compatible file - Guanaco-7B-GPTQ-4bit-128g.no-act-order.safetensors**
In the `main` branch you will find `Guanaco-7B-GPTQ-4bit-128g.no-act-order.safetensors`.
This file will work with all versions of GPTQ-for-LLaMa, giving maximum compatibility.
It was created with groupsize 128 to improve inference quality, and without the `--act-order` parameter to maximise compatibility with older GPTQ-for-LLaMa code.
* `Guanaco-7B-GPTQ-4bit-128g.no-act-order.safetensors`
* Works with all versions of GPTQ-for-LLaMa code, both Triton and CUDA branches
* Works with AutoGPTQ
* Works with text-generation-webui one-click-installers
* Parameters: Groupsize = 128. No act-order.
* Command used to create the GPTQ:
```
python llama.py /workspace/process/TheBloke_guanaco-7B-GGML/HF wikitext2 --wbits 4 --true-sequential --groupsize 128 --save_safetensors /workspace/process/TheBloke_guanaco-7B-GGML/gptq/Guanaco-7B-GPTQ-4bit-128g.no-act-order.safetensors
```
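If you only want the quantised weights themselves, for example to use with a local GPTQ-for-LLaMa checkout rather than text-generation-webui, a minimal sketch using `huggingface_hub` to fetch the file listed above:

```python
from huggingface_hub import hf_hub_download

# Fetch the 4-bit safetensors file from the main branch of this repo
local_path = hf_hub_download(
    repo_id="TheBloke/guanaco-7B-GPTQ",
    filename="Guanaco-7B-GPTQ-4bit-128g.no-act-order.safetensors",
)
print(local_path)  # path in the local Hugging Face cache
```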
# Original model card
Not provided by original model creator.