Upload new GPTQs with varied parameters
README.md
CHANGED
@@ -1,146 +1,172 @@
---
license: other
library_name: transformers
pipeline_tag: text-generation
datasets:
- RyokoAI/ShareGPT52K
- Hello-SimpleAI/HC3
tags:
- koala
- ShareGPT
- llama
- gptq
inference: false
---

<!-- header start -->
<div style="width: 100%;">
    <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
    <div style="display: flex; flex-direction: column; align-items: flex-start;">
        <p><a href="https://discord.gg/
    </div>
    <div style="display: flex; flex-direction: column; align-items: flex-end;">
        <p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
    </div>
</div>
<!-- header end -->

# Koala: A Dialogue Model for Academic Research

This repo contains the weights of the Koala 7B model produced at Berkeley. It is the result of combining the diffs from https://huggingface.co/young-geng/koala with the original Llama 7B model.

I have the following Koala model repositories available:

* [Unquantized 13B model in HF format](https://huggingface.co/TheBloke/koala-13B-HF)
* [GPTQ quantized 4bit 13B model in `pt` and `safetensors` formats](https://huggingface.co/TheBloke/koala-13B-GPTQ-4bit-128g)
* [4-bit, 5-bit and 8-bit GGML models for `llama.cpp`](https://huggingface.co/TheBloke/koala-13B-GGML)

* [Unquantized 7B model in HF format](https://huggingface.co/TheBloke/koala-7B-HF)
* [Unquantized 7B model in GGML format for llama.cpp](https://huggingface.co/TheBloke/koala-7b-ggml-unquantized)
* [GPTQ quantized 4bit 7B model in `pt` and `safetensors` formats](https://huggingface.co/TheBloke/koala-7B-GPTQ-4bit-128g)
* [4-bit, 5-bit and 8-bit GGML models for `llama.cpp`](https://huggingface.co/TheBloke/koala-7B-GGML)

## Provided files

Details of the files provided:

* `koala-7B-4bit-128g.safetensors`
  * newer `safetensors` format, with improved file security, created with the latest [GPTQ-for-LLaMa](https://github.com/qwopqwop200/GPTQ-for-LLaMa) code.
  * Command to create:
    * `python3 llama.py koala-7B-HF c4 --wbits 4 --true-sequential --act-order --groupsize 128 --save_safetensors koala-7B-4bit-128g.safetensors`
* `koala-7B-4bit-128g.no-act-order.ooba.pt`
  * `pt` format file, created with [oobabooga's older CUDA fork of GPTQ-for-LLaMa](https://github.com/oobabooga/GPTQ-for-LLaMa).
  * This file is included primarily for Windows users, as it can be used without needing to compile the latest GPTQ-for-LLaMa code.
  * It should hopefully therefore work with one-click-installers on Windows, which include the older GPTQ-for-LLaMa code.
  * The older GPTQ code does not support all the latest features, so the quality may be fractionally lower.
  * Command to create:
    * `python3 llama.py koala-7B-HF c4 --wbits 4 --true-sequential --groupsize 128 --save koala-7B-4bit-128g.no-act-order.ooba.pt`

```
git clone https://
git clone https://github.com/oobabooga/text-generation-webui
mkdir -p text-generation-webui/repositories
ln -s GPTQ-for-LLaMa text-generation-webui/repositories/GPTQ-for-LLaMa
```

```
cd text-generation-webui
python server.py --model koala-7B-GPTQ-4bit-128g --wbits 4 --groupsize 128 --model_type Llama # add any other command line args you want
```

```
git clone https://github.com/qwopqwop200/GPTQ-for-LLaMa -b cuda
cd GPTQ-for-LLaMa
python setup_cuda.py install
```

Then link that into `text-generation-webui/repositories` as described above.

## How

```
--load_base_checkpoint='params::/content/llama-7B-LM' \
--load_target_checkpoint='params::/content/koala_diffs/koala_7b_diff_v2' \
--output_file=/content/koala_7b.diff.weights \
--streaming=True
```

<!-- footer start -->
## Discord

For further support, and discussions on these models and AI in general, join us at:

[TheBloke AI's Discord server](https://discord.gg/

## Thanks, and how to contribute.

@@ -155,13 +181,19 @@ Donaters will get priority support on any and all AI/LLM/model questions and req

* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI

Thank you to all my generous patrons and donaters!

<!-- footer end -->

## Further info

* [Blog post](https://bair.berkeley.edu/blog/2023/04/03/koala/)
* [Online demo](https://koala.lmsys.org/)
* [EasyLM: training and serving framework on GitHub](https://github.com/young-geng/EasyLM)

@@ -173,3 +205,4 @@ The model weights are intended for academic research only, subject to the

[Terms of Use of the data generated by OpenAI](https://openai.com/policies/terms-of-use),
and [Privacy Practices of ShareGPT](https://chrome.google.com/webstore/detail/sharegpt-share-your-chatg/daiacboceoaocpibfodeljbdfacokfjb).
Any other usage of the model weights, including but not limited to commercial usage, is strictly prohibited.

---
inference: false
license: other
model_type: llama
---

<!-- header start -->
<div style="width: 100%;">
    <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
    <div style="display: flex; flex-direction: column; align-items: flex-start;">
        <p><a href="https://discord.gg/theblokeai">Chat & support: my new Discord server</a></p>
    </div>
    <div style="display: flex; flex-direction: column; align-items: flex-end;">
        <p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
    </div>
</div>
<!-- header end -->

# Young Geng's Koala 7B GPTQ

These files are GPTQ model files for [Young Geng's Koala 7B](https://huggingface.co/young-geng/koala).

Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them.

These models were quantised using hardware kindly provided by [Latitude.sh](https://www.latitude.sh/accelerate).

## Repositories available

* [GPTQ models for GPU inference, with multiple quantisation parameter options](https://huggingface.co/TheBloke/koala-7B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/koala-7B-GGML)
* [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/TheBloke/koala-7B-HF)

## Prompt template: Koala

```
BEGINNING OF CONVERSATION:
USER: {prompt}
GPT:
```

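If you are building prompts in your own code rather than through a UI, the template can be filled in with ordinary string formatting. A minimal sketch follows; the helper name and example prompt are illustrative only, not part of the model or any library:

```python
# Minimal sketch: fill the Koala prompt template shown above.
# build_koala_prompt is an illustrative helper name, not an official API.
def build_koala_prompt(user_message: str) -> str:
    return (
        "BEGINNING OF CONVERSATION:\n"
        f"USER: {user_message}\n"
        "GPT:"
    )

print(build_koala_prompt("Tell me about AI"))
```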

## Provided files

Multiple quantisation parameters are provided, to allow you to choose the best one for your hardware and requirements.

Each separate quant is in a different branch. See below for instructions on fetching from different branches.

| Branch | Bits | Group Size | Act Order (desc_act) | File Size | ExLlama Compatible? | Made With | Description |
| ------ | ---- | ---------- | -------------------- | --------- | ------------------- | --------- | ----------- |
| main | 4 | 128 | False | 4.00 GB | True | GPTQ-for-LLaMa | Most compatible option. Good inference speed in AutoGPTQ and GPTQ-for-LLaMa. Lower inference quality than other options. |
| gptq-4bit-32g-actorder_True | 4 | 32 | True | 4.28 GB | True | AutoGPTQ | 4-bit, with Act Order and group size. 32g gives highest possible inference quality, with maximum VRAM usage. Poor AutoGPTQ CUDA speed. |
| gptq-4bit-64g-actorder_True | 4 | 64 | True | 4.02 GB | True | AutoGPTQ | 4-bit, with Act Order and group size. 64g uses less VRAM than 32g, but with slightly lower accuracy. Poor AutoGPTQ CUDA speed. |
| gptq-4bit-128g-actorder_True | 4 | 128 | True | 3.90 GB | True | AutoGPTQ | 4-bit, with Act Order and group size. 128g uses even less VRAM, but with slightly lower accuracy. Poor AutoGPTQ CUDA speed. |
| gptq-8bit--1g-actorder_True | 8 | None | True | 7.01 GB | False | AutoGPTQ | 8-bit, with Act Order. No group size, to lower VRAM requirements and to improve AutoGPTQ speed. |
| gptq-8bit-128g-actorder_False | 8 | 128 | False | 7.16 GB | False | AutoGPTQ | 8-bit, with group size 128g for higher inference quality and without Act Order to improve AutoGPTQ speed. |
| gptq-8bit-128g-actorder_True | 8 | 128 | True | 7.16 GB | False | AutoGPTQ | 8-bit, with group size 128g for higher inference quality and with Act Order for even higher accuracy. Poor AutoGPTQ CUDA speed. |
| gptq-8bit-64g-actorder_True | 8 | 64 | True | 7.31 GB | False | AutoGPTQ | 8-bit, with group size 64g and Act Order for maximum inference quality. Poor AutoGPTQ CUDA speed. |

## How to download from branches

- In text-generation-webui, you can add `:branch` to the end of the download name, e.g. `TheBloke/koala-7B-GPTQ:gptq-4bit-32g-actorder_True`
- With Git, you can clone a branch with:
```
git clone --branch gptq-4bit-32g-actorder_True https://huggingface.co/TheBloke/koala-7B-GPTQ
```
- In Python Transformers code, the branch is the `revision` parameter; see below.

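Alternatively, if you just want to mirror the files of one branch onto disk from Python, a minimal sketch using the `huggingface_hub` library would be (this assumes a reasonably recent `huggingface_hub`; the `local_dir` path is an illustrative choice, not a required name):

```python
# Minimal sketch: download a single quantisation branch with huggingface_hub.
# The local_dir value is illustrative; adjust it for your setup.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="TheBloke/koala-7B-GPTQ",
    revision="gptq-4bit-32g-actorder_True",  # branch name from the table above
    local_dir="koala-7B-GPTQ-4bit-32g",
)
```

This only fetches the chosen branch to disk; loading the model afterwards works the same as described below.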

## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui).

Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui).

It is strongly recommended to use the text-generation-webui one-click-installers unless you know how to make a manual install.

1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/koala-7B-GPTQ`.
    - To download from a specific branch, enter for example `TheBloke/koala-7B-GPTQ:gptq-4bit-32g-actorder_True`
    - See Provided Files above for the list of branches for each option.
3. Click **Download**.
4. The model will start downloading. Once it's finished it will say "Done".
5. In the top left, click the refresh icon next to **Model**.
6. In the **Model** dropdown, choose the model you just downloaded: `koala-7B-GPTQ`
7. The model will automatically load, and is now ready for use!
8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
    * Note that you do not need to set GPTQ parameters any more. These are set automatically from the file `quantize_config.json`.
9. Once you're ready, click the **Text Generation tab** and enter a prompt to get started!


## How to use this GPTQ model from Python code

First make sure you have [AutoGPTQ](https://github.com/PanQiWei/AutoGPTQ) installed:

`GITHUB_ACTIONS=true pip install auto-gptq`

Then try the following example code:

```python
from transformers import AutoTokenizer, pipeline, logging
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig

model_name_or_path = "TheBloke/koala-7B-GPTQ"
model_basename = "koala-7b-GPTQ-4bit-128g.no-act.order"

use_triton = False

tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)

model = AutoGPTQForCausalLM.from_quantized(model_name_or_path,
        model_basename=model_basename,
        use_safetensors=True,
        trust_remote_code=True,
        device="cuda:0",
        use_triton=use_triton,
        quantize_config=None)

"""
To download from a specific branch, use the revision parameter, as in this example:

model = AutoGPTQForCausalLM.from_quantized(model_name_or_path,
        revision="gptq-4bit-32g-actorder_True",
        model_basename=model_basename,
        use_safetensors=True,
        trust_remote_code=True,
        device="cuda:0",
        quantize_config=None)
"""

prompt = "Tell me about AI"
prompt_template=f'''BEGINNING OF CONVERSATION:
USER: {prompt}
GPT:
'''

print("\n\n*** Generate:")

input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
output = model.generate(inputs=input_ids, temperature=0.7, max_new_tokens=512)
print(tokenizer.decode(output[0]))

# Inference can also be done using transformers' pipeline

# Prevent printing spurious transformers error when using pipeline with AutoGPTQ
logging.set_verbosity(logging.CRITICAL)

print("*** Pipeline:")
pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    max_new_tokens=512,
    temperature=0.7,
    top_p=0.95,
    repetition_penalty=1.15
)

print(pipe(prompt_template)[0]['generated_text'])
```

## Compatibility

The files provided will work with AutoGPTQ (CUDA and Triton modes), GPTQ-for-LLaMa (only CUDA has been tested), and Occ4m's GPTQ-for-LLaMa fork.

ExLlama works with Llama models in 4-bit. Please see the Provided Files table above for per-file compatibility.

<!-- footer start -->
## Discord

For further support, and discussions on these models and AI in general, join us at:

[TheBloke AI's Discord server](https://discord.gg/theblokeai)

## Thanks, and how to contribute.

* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI

**Special thanks to**: Luke from CarbonQuill, Aemon Algiz.

**Patreon special mentions**: Space Cruiser, Nikolai Manek, Sam, Chris McCloskey, Rishabh Srivastava, Kalila, Spiking Neurons AB, Khalefa Al-Ahmad, WelcomeToTheClub, Chadd, Lone Striker, Viktor Bowallius, Edmond Seymore, Ai Maven, Chris Smitley, Dave, Alexandros Triantafyllidis, Luke @flexchar, Elle, ya boyyy, Talal Aujan, Alex, Jonathan Leane, Deep Realms, Randy H, subjectnull, Preetika Verma, Joseph William Delisle, Michael Levine, chris gileta, K, Oscar Rangel, LangChain4j, Trenton Dambrowitz, Eugene Pentland, Johann-Peter Hartmann, Femi Adebogun, Illia Dulskyi, senxiiz, Daniel P. Andersen, Sean Connelly, Artur Olbinski, RoA, Mano Prime, Derek Yates, Raven Klaugh, David Flickinger, Willem Michiel, Pieter, Willian Hasse, vamX, Luke Pendergrass, webtim, Ghost, Rainer Wilmers, Nathan LeClaire, Will Dee, Cory Kujawski, John Detwiler, Fred von Graf, biorpg, Iucharbius, Imad Khwaja, Pierre Kircher, terasurfer, Asp the Wyvern, John Villwock, theTransient, zynix, Gabriel Tamborski, Fen Risland, Gabriel Puliatti, Matthew Berman, Pyrater, SuperWojo, Stephen Murray, Karl Bernard, Ajan Kanaga, Greatston Gnanesh, Junyu Yang.

Thank you to all my generous patrons and donaters!

<!-- footer end -->

# Original model card: Young Geng's Koala 7B

# Koala: A Dialogue Model for Academic Research

This repo contains the weights diff against the base LLaMA for the Koala model. Check out the following links to get started:

* [Blog post](https://bair.berkeley.edu/blog/2023/04/03/koala/)
* [Online demo](https://koala.lmsys.org/)
* [EasyLM: training and serving framework on GitHub](https://github.com/young-geng/EasyLM)

[Terms of Use of the data generated by OpenAI](https://openai.com/policies/terms-of-use),
and [Privacy Practices of ShareGPT](https://chrome.google.com/webstore/detail/sharegpt-share-your-chatg/daiacboceoaocpibfodeljbdfacokfjb).
Any other usage of the model weights, including but not limited to commercial usage, is strictly prohibited.
Please contact us if you find any potential violations. Our training and inference code is released under the Apache License 2.0.