New GGMLv3 format for breaking llama.cpp change May 19th commit 2d5db48
README.md CHANGED
@@ -15,27 +15,29 @@ GGML files are for CPU inference using [llama.cpp](https://github.com/ggerganov/llama.cpp).
* [4bit and 5bit GGML models for CPU inference](https://huggingface.co/TheBloke/gpt4-x-vicuna-13B-GGML).
* [float16 HF model for unquantised and 8bit GPU inference](https://huggingface.co/TheBloke/gpt4-x-vicuna-13B-HF).

-## REQUIRES LATEST LLAMA.CPP (May …
+## THE FILES IN THE MAIN BRANCH REQUIRE THE LATEST LLAMA.CPP (May 19th 2023 - commit 2d5db48)!

-llama.cpp recently made …
+llama.cpp recently made another breaking change to its quantisation methods: https://github.com/ggerganov/llama.cpp/pull/1508

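If you are unsure which format a given file uses, one quick check is the file header. As far as I know, these GGJT-family files begin with a 4-byte magic followed by a 4-byte little-endian version field, with 3 indicating the new GGMLv3 format; treat the exact layout as an assumption rather than a spec:

```
# Show the first 8 bytes of the model file: the magic bytes,
# then a version field (expected to read 3 for GGMLv3, 2 for GGMLv2).
xxd -l 8 gpt4-x-vicuna-13B.ggmlv3.q4_0.bin
```
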
-I have …
+I have quantised the GGML files in this repo with the latest version. You will therefore need llama.cpp compiled on May 19th or later (commit `2d5db48` or later) to use them.

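A minimal build sketch for getting a compatible binary (assuming a Unix-like system with `git` and a C compiler; `make` is the stock llama.cpp build):

```
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
git checkout 2d5db48   # the May 19th commit; any later commit should also work
make                   # produces ./main, used in the example further down
```
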
-
+For files compatible with the previous version of llama.cpp, please see branch `previous_llama_ggmlv2`.

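One way to fetch that branch without pulling every quantisation at once (a sketch assuming `git` and `git-lfs` are installed; downloading the files from the web UI works just as well):

```
# Clone the old branch with LFS files as lightweight pointers only,
# then materialise just the quantisation you want.
GIT_LFS_SKIP_SMUDGE=1 git clone --branch previous_llama_ggmlv2 \
    https://huggingface.co/TheBloke/gpt4-x-vicuna-13B-GGML
cd gpt4-x-vicuna-13B-GGML
git lfs pull --include "*.q4_0.bin"
```
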
## Provided files
| Name | Quant method | Bits | Size | RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
-`gpt4-x-vicuna-13B.…
-`gpt4-x-vicuna-13B.…
-`gpt4-x-vicuna-13B.…
+| `gpt4-x-vicuna-13B.ggmlv3.q4_0.bin` | q4_0 | 4bit | 8.14GB | 10GB | 4-bit. |
+| `gpt4-x-vicuna-13B.ggmlv3.q4_1.bin` | q4_1 | 4bit | 8.95GB | 10GB | 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However, it has quicker inference than the q5 models. |
+| `gpt4-x-vicuna-13B.ggmlv3.q5_0.bin` | q5_0 | 5bit | 8.95GB | 11GB | 5-bit. Higher accuracy, higher resource usage and slower inference. |
+| `gpt4-x-vicuna-13B.ggmlv3.q5_1.bin` | q5_1 | 5bit | 9.76GB | 12GB | 5-bit. Even higher accuracy, higher resource usage and slower inference. |
+| `gpt4-x-vicuna-13B.ggmlv3.q8_0.bin` | q8_0 | 8bit | 16GB | 18GB | 8-bit. Almost indistinguishable from float16. Huge resource use and slow. Not recommended for normal use. |
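To download a single file from the table rather than cloning the whole repo, a direct-download sketch (it assumes Hugging Face's usual `resolve/<revision>/<filename>` URL layout; swap in whichever quantisation fits your RAM):

```
# Fetch one quantisation from the main branch.
wget https://huggingface.co/TheBloke/gpt4-x-vicuna-13B-GGML/resolve/main/gpt4-x-vicuna-13B.ggmlv3.q5_0.bin
```
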

## How to run in `llama.cpp`

I use the following command line; adjust for your tastes and needs:

```
-./main -t 12 -m gpt4-x-vicuna-13B.…
+./main -t 12 -m gpt4-x-vicuna-13B.ggmlv3.q4_0.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
Write a story about llamas
### Response:"
```

@@ -48,9 +50,7 @@ If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument …
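The context line above is cut short; a reasonable reading is that the one-shot `-p <PROMPT>` argument gets swapped for llama.cpp's interactive instruct mode. A sketch, assuming the `-i`/`-ins` flags of a May 2023 `main` binary:

```
# Same settings as the one-shot example, but interactive: -i enables
# interactive mode, -ins applies the instruction template to each turn.
./main -t 12 -m gpt4-x-vicuna-13B.ggmlv3.q4_0.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -i -ins
```
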

Further instructions here: [text-generation-webui/docs/llama.cpp-models.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp-models.md).

-Note: at this time text-generation-webui …
-
-**Thireus** has written a [great guide on how to update it to the latest llama.cpp code](https://huggingface.co/TheBloke/wizardLM-7B-GGML/discussions/5) so that you can get support for the new llama.cpp quantisation methods sooner.
+Note: at this time text-generation-webui may not support the new May 19th llama.cpp quantisation methods for q4_0, q4_1 and q8_0 files.

# Original model card

@@ -72,4 +72,4 @@ Wizard LM by https://github.com/nlpxucan

Nous Research Instruct Dataset by https://huggingface.co/karan4d and https://huggingface.co/huemin

-Compute provided by our project sponsor https://redmond.ai/
+Compute provided by our project sponsor https://redmond.ai/