|
--- |
|
license: gpl |
|
datasets: |
|
- nomic-ai/gpt4all-j-prompt-generations |
|
language: |
|
- en |
|
inference: false |
|
--- |
|
# GPT4All-13B-snoozy-GPTQ |
|
|
|
This repo contains 4bit GPTQ format quantised models of [Nomic.AI's GPT4all-13B-snoozy](https://huggingface.co/nomic-ai/gpt4all-13b-snoozy). |
|
|
|
It is the result of quantising to 4bit using [GPTQ-for-LLaMa](https://github.com/qwopqwop200/GPTQ-for-LLaMa). |
|
|
|
## Repositories available |
|
|
|
* [4bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/GPT4ALL-13B-snoozy-GPTQ). |
|
* [Nomic.AI's original model in float32 HF for GPU inference](https://huggingface.co/nomic-ai/gpt4all-13b-snoozy). |
|
|
|
## How to easily download and use this model in text-generation-webui |
|
|
|
Open text-generation-webui as normal.
|
|
|
1. Click the **Model tab**. |
|
2. Under **Download custom model or LoRA**, enter `TheBloke/GPT4All-13B-snoozy-GPTQ`. |
|
3. Click **Download**. |
|
4. Wait until it says it's finished downloading. |
|
5. Click the **Refresh** icon next to **Model** in the top left. |
|
6. In the **Model drop-down**: choose the model you just downloaded, `GPT4All-13B-snoozy-GPTQ`. |
|
7. If you see an error in the bottom right, ignore it - it's temporary. |
|
8. Fill out the `GPTQ parameters` on the right: `Bits = 4`, `Groupsize = 128`, `model_type = Llama` |
|
9. Click **Save settings for this model** in the top right. |
|
10. Click **Reload the Model** in the top right. |
|
11. Once it says it's loaded, click the **Text Generation tab** and enter a prompt! |
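
If you would rather fetch the files from a script than through the UI, the same repository can be downloaded with the `huggingface_hub` library. This is not part of the original instructions, just a minimal sketch assuming `huggingface_hub` is installed (`pip install huggingface_hub`):

```python
# Minimal sketch: download every file in the repo (config, tokenizer, the .safetensors)
# into the local Hugging Face cache and print the resulting path. Copy or link that
# folder into text-generation-webui/models/ if you want the UI to pick it up.
from huggingface_hub import snapshot_download

local_path = snapshot_download(repo_id="TheBloke/GPT4All-13B-snoozy-GPTQ")
print(local_path)
```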
|
|
|
## Provided files |
|
|
|
**Compatible file - GPT4ALL-13B-GPTQ-4bit-128g.compat.no-act-order.safetensors** |
|
|
|
In the `main` branch - the default one - you will find `GPT4ALL-13B-GPTQ-4bit-128g.compat.no-act-order.safetensors`.
|
|
|
This file will work with all versions of GPTQ-for-LLaMa, giving maximum compatibility.
|
|
|
It was created without the `--act-order` parameter. It may have slightly lower inference quality compared to the other file, but is guaranteed to work on all versions of GPTQ-for-LLaMa and text-generation-webui. |
|
|
|
* `GPT4ALL-13B-GPTQ-4bit-128g.compat.no-act-order.safetensors` |
|
* Works with all versions of GPTQ-for-LLaMa code, both Triton and CUDA branches |
|
* Works with text-generation-webui one-click-installers |
|
* Parameters: Groupsize = 128. No act-order.
|
* Command used to create the GPTQ: |
|
``` |
|
CUDA_VISIBLE_DEVICES=0 python3 llama.py GPT4All-13B-snoozy c4 --wbits 4 --true-sequential --groupsize 128 --save_safetensors GPT4ALL-13B-GPTQ-4bit-128g.compat.no-act-order.safetensors |
|
``` |
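
If you want to load this file from Python rather than through text-generation-webui, one option is the AutoGPTQ library. AutoGPTQ is not mentioned elsewhere in this card, so treat the following as a hedged sketch rather than the officially supported path; the quantisation settings simply mirror the ones stated for this file (4-bit, groupsize 128, no act-order):

```python
# Hedged sketch: load the no-act-order safetensors with AutoGPTQ (pip install auto-gptq).
# model_basename is the provided filename without its .safetensors extension, and the
# BaseQuantizeConfig values repeat the parameters listed above; adjust if they change.
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig

model_id = "TheBloke/GPT4All-13B-snoozy-GPTQ"

tokenizer = AutoTokenizer.from_pretrained(model_id, use_fast=True)
model = AutoGPTQForCausalLM.from_quantized(
    model_id,
    model_basename="GPT4ALL-13B-GPTQ-4bit-128g.compat.no-act-order",
    use_safetensors=True,
    quantize_config=BaseQuantizeConfig(bits=4, group_size=128, desc_act=False),
    device="cuda:0",
)

prompt = "Tell me about quantisation in one paragraph."
inputs = tokenizer(prompt, return_tensors="pt").to("cuda:0")
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```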
|
|
|
|
|
# Original Model Card for GPT4All-13b-snoozy |
|
|
|
An Apache-2 licensed chatbot trained over a massive curated corpus of assistant interactions including word problems, multi-turn dialogue, code, poems, songs, and stories. |
|
|
|
## Model Details |
|
|
|
### Model Description |
|
|
|
|
|
|
This model has been finetuned from LLaMA 13B.
|
|
|
- **Developed by:** [Nomic AI](https://home.nomic.ai) |
|
- **Model Type:** A LLaMA 13B model finetuned on assistant-style interaction data
|
- **Language(s) (NLP):** English |
|
- **License:** Apache-2 |
|
- **Finetuned from model:** LLaMA 13B
|
|
|
This model was trained on `nomic-ai/gpt4all-j-prompt-generations` using `revision=v1.3-groovy`.
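
If you want to look at the training prompts themselves, the named dataset and revision can be pulled with the `datasets` library; this snippet is an illustration added here, not something from the original card:

```python
# Sketch: inspect the referenced training data (pip install datasets).
# The "train" split name is an assumption; print(ds) first to see what is available.
from datasets import load_dataset

ds = load_dataset("nomic-ai/gpt4all-j-prompt-generations", revision="v1.3-groovy")
print(ds)
print(ds["train"][0])
```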
|
|
|
### Model Sources
|
|
|
|
|
|
- **Repository:** [https://github.com/nomic-ai/gpt4all](https://github.com/nomic-ai/gpt4all) |
|
- **Base Model Repository:** [https://github.com/facebookresearch/llama](https://github.com/facebookresearch/llama) |
|
- **Demo:** [https://gpt4all.io/](https://gpt4all.io/)
|
|
|
|
|
### Results |
|
|
|
Results on common sense reasoning benchmarks |
|
|
|
``` |
|
Model BoolQ PIQA HellaSwag WinoGrande ARC-e ARC-c OBQA |
|
----------------------- ---------- ---------- ----------- ------------ ---------- ---------- ---------- |
|
GPT4All-J 6B v1.0 73.4 74.8 63.4 64.7 54.9 36.0 40.2 |
|
GPT4All-J v1.1-breezy 74.0 75.1 63.2 63.6 55.4 34.9 38.4 |
|
GPT4All-J v1.2-jazzy 74.8 74.9 63.6 63.8 56.6 35.3 41.0 |
|
GPT4All-J v1.3-groovy 73.6 74.3 63.8 63.5 57.7 35.0 38.8 |
|
GPT4All-J Lora 6B 68.6 75.8 66.2 63.5 56.4 35.7 40.2 |
|
GPT4All LLaMa Lora 7B 73.1 77.6 72.1 67.8 51.1 40.4 40.2 |
|
GPT4All 13B snoozy *83.3* 79.2 75.0 *71.3* 60.9 44.2 43.4 |
|
Dolly 6B 68.8 77.3 67.6 63.9 62.9 38.7 41.2 |
|
Dolly 12B 56.7 75.4 71.0 62.2 *64.6* 38.5 40.4 |
|
Alpaca 7B 73.9 77.2 73.9 66.1 59.8 43.3 43.4 |
|
Alpaca Lora 7B 74.3 *79.3* 74.0 68.8 56.6 43.9 42.6 |
|
GPT-J 6B 65.4 76.2 66.2 64.1 62.2 36.6 38.2 |
|
LLama 7B 73.1 77.4 73.0 66.9 52.5 41.4 42.4 |
|
LLama 13B 68.5 79.1 *76.2* 70.1 60.0 *44.6* 42.2 |
|
Pythia 6.9B 63.5 76.3 64.0 61.1 61.3 35.2 37.2 |
|
Pythia 12B 67.7 76.6 67.3 63.8 63.9 34.8 38.0 |
|
Vicuña T5 81.5 64.6 46.3 61.8 49.3 33.3 39.4 |
|
Vicuña 13B 81.5 76.8 73.3 66.7 57.4 42.7 43.6 |
|
Stable Vicuña RLHF 82.3 78.6 74.1 70.9 61.0 43.5 *44.4* |
|
StableLM Tuned 62.5 71.2 53.6 54.8 52.4 31.1 33.4 |
|
StableLM Base 60.1 67.4 41.2 50.1 44.9 27.0 32.0 |
|
``` |
|
|