Update README.md
README.md CHANGED
@@ -20,29 +20,32 @@ I have the following Koala model repositories available:
* [GPTQ quantized 4bit 7B model in `pt` and `safetensors` formats](https://huggingface.co/TheBloke/koala-7B-GPTQ-4bit-128g)
* [GPTQ quantized 4bit 7B model in GGML format for `llama.cpp`](https://huggingface.co/TheBloke/koala-7B-GPTQ-4bit-128g-GGML)

-## Quantization method
-
-This GPTQ model was quantized using [GPTQ-for-LLaMa](https://github.com/qwopqwop200/GPTQ-for-LLaMa) with the following commands:
-```
-python3 llama.py /content/koala-7B-HF c4 --wbits 4 --true-sequential --act-order --groupsize 128 --save /content/koala-7B-4bit-128g.pt
-python3 llama.py /content/koala-7B-HF c4 --wbits 4 --true-sequential --act-order --groupsize 128 --save_safetensors /content/koala-7B-4bit-128g.safetensors
-```
-
-I used the latest Triton branch of `GPTQ-for-LLaMa` but they can also be loaded with the CUDA branch.
-
## Provided files

Here are the commands I used to clone the Triton branch of GPTQ-for-LLaMa, clone text-generation-webui, and install GPTQ into the UI:
```
@@ -55,12 +58,12 @@ ln -s GPTQ-for-LLaMa text-generation-webui/repositories/GPTQ-for-LLaMa
Then install this model into `text-generation-webui/models` and launch the UI as follows:
```
cd text-generation-webui
-python server.py --model koala-
```

The above commands assume you have installed all dependencies for GPTQ-for-LLaMa and text-generation-webui. Please see their respective repositories for further information.

-If you cannot use the Triton branch of GPTQ for any reason, you can
```
git clone https://github.com/qwopqwop200/GPTQ-for-LLaMa -b cuda
cd GPTQ-for-LLaMa
@@ -68,6 +71,8 @@ python setup_cuda.py install
```
Then link that into `text-generation-webui/repositories` as described above.

## How the Koala delta weights were merged

The Koala delta weights were originally merged using the following commands, producing [koala-7B-HF](https://huggingface.co/TheBloke/koala-7B-HF):

* [GPTQ quantized 4bit 7B model in `pt` and `safetensors` formats](https://huggingface.co/TheBloke/koala-7B-GPTQ-4bit-128g)
* [GPTQ quantized 4bit 7B model in GGML format for `llama.cpp`](https://huggingface.co/TheBloke/koala-7B-GPTQ-4bit-128g-GGML)

## Provided files

+Three model files are provided. You don't need all three - choose the one that suits your needs best!
+
+Details of the files provided:
+* `koala-7B-4bit-128g.pt`
+  * `pt` format file, created with the latest [GPTQ-for-LLaMa](https://github.com/qwopqwop200/GPTQ-for-LLaMa) code.
+  * Command to create:
+    * `python3 llama.py koala-7B-HF c4 --wbits 4 --true-sequential --act-order --groupsize 128 --save koala-7B-4bit-128g.pt`
+* `koala-7B-4bit-128g.safetensors`
+  * newer `safetensors` format, with improved file security, created with the latest [GPTQ-for-LLaMa](https://github.com/qwopqwop200/GPTQ-for-LLaMa) code.
+  * Command to create:
+    * `python3 llama.py koala-7B-HF c4 --wbits 4 --true-sequential --act-order --groupsize 128 --save_safetensors koala-7B-4bit-128g.safetensors`
+* `koala-7B-4bit-128g.no-act-order.ooba.pt`
+  * `pt` format file, created with [oobabooga's older CUDA fork of GPTQ-for-LLaMa](https://github.com/oobabooga/GPTQ-for-LLaMa).
+  * This file is included primarily for Windows users, as it can be used without needing to compile the latest GPTQ-for-LLaMa code.
+  * It should therefore hopefully work with the one-click installers on Windows, which include the older GPTQ-for-LLaMa code.
+  * The older GPTQ code does not support all the latest features, so the quality may be fractionally lower.
+  * Command to create:
+    * `python3 llama.py koala-7B-HF c4 --wbits 4 --true-sequential --groupsize 128 --save koala-7B-4bit-128g.no-act-order.ooba.pt`
+
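If you only want one of the three files, it can be fetched directly rather than cloning the whole repo; a minimal sketch using the repo's direct download URLs (swap in whichever filename suits you, and `curl -LO` works equally well):
```
# download a single model file straight from the Hugging Face repo
wget https://huggingface.co/TheBloke/koala-7B-GPTQ-4bit-128g/resolve/main/koala-7B-4bit-128g.safetensors
```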

+## How to run in `text-generation-webui`
+
+The file `koala-7B-4bit-128g.no-act-order.ooba.pt` can be loaded the same as any other GPTQ file, without requiring any updates to [oobabooga's text-generation-webui](https://github.com/oobabooga/text-generation-webui).
+
+The other two model files were created with the latest GPTQ code, and require that the latest GPTQ-for-LLaMa is used inside the UI.

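Whether the latest code is in place is easy to check from the UI's `repositories` directory; a minimal sketch, assuming GPTQ-for-LLaMa has been cloned or linked under `text-generation-webui/repositories/` as described below:
```
# check which GPTQ-for-LLaMa commit the UI is using, and update it if needed
cd text-generation-webui/repositories/GPTQ-for-LLaMa
git log -1 --oneline
git pull
```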
Here are the commands I used to clone the Triton branch of GPTQ-for-LLaMa, clone text-generation-webui, and install GPTQ into the UI:
```

Then install this model into `text-generation-webui/models` and launch the UI as follows:
```
cd text-generation-webui
+python server.py --model koala-7B-GPTQ-4bit-128g --wbits 4 --groupsize 128 --model_type Llama # add any other command line args you want
```

The above commands assume you have installed all dependencies for GPTQ-for-LLaMa and text-generation-webui. Please see their respective repositories for further information.

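One way to get the model into `text-generation-webui/models` in the first place is to clone this repo there; a minimal sketch, assuming `git` and `git-lfs` are installed (any other download method works just as well):
```
# clone the model repo into the UI's models directory (git-lfs is needed for the large files)
cd text-generation-webui/models
git clone https://huggingface.co/TheBloke/koala-7B-GPTQ-4bit-128g
cd ../..
```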
+If you are on Windows, or cannot use the Triton branch of GPTQ for any other reason, you can instead use the CUDA branch:
```
git clone https://github.com/qwopqwop200/GPTQ-for-LLaMa -b cuda
cd GPTQ-for-LLaMa
python setup_cuda.py install
```
Then link that into `text-generation-webui/repositories` as described above.

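The linking step is the same symlink used for the Triton branch; a minimal sketch, assuming the `GPTQ-for-LLaMa` and `text-generation-webui` checkouts sit side by side in the current directory and that the `repositories` directory already exists (an absolute target avoids a dangling relative link):
```
# symlink the CUDA-branch checkout into the UI's repositories directory
ln -s "$(pwd)/GPTQ-for-LLaMa" text-generation-webui/repositories/GPTQ-for-LLaMa
```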
+Or just use `koala-7B-4bit-128g.no-act-order.ooba.pt` as mentioned above.
+

## How the Koala delta weights were merged

The Koala delta weights were originally merged using the following commands, producing [koala-7B-HF](https://huggingface.co/TheBloke/koala-7B-HF):