New GGMLv3 format for breaking llama.cpp change May 19th commit 2d5db48
README.md
CHANGED

This model requires the following prompt template:

```
<|prompter|>prompt goes here
<|assistant|>:
```

## THE FILES IN MAIN BRANCH REQUIRE LATEST LLAMA.CPP (May 19th 2023 - commit 2d5db48)!

llama.cpp recently made another breaking change to its quantisation methods - https://github.com/ggerganov/llama.cpp/pull/1508

I have quantised the GGML files in this repo with the latest version. Therefore you will require llama.cpp compiled on May 19th or later (commit `2d5db48` or later) to use them.

For files compatible with the previous version of llama.cpp, please see branch `previous_llama_ggmlv2`.
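
If you need to build a compatible llama.cpp yourself, the following minimal sketch should work (it assumes the standard `make` build; any commit from `2d5db48` onwards is fine):

```
# Build llama.cpp at or after commit 2d5db48 (May 19th 2023)
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
git checkout 2d5db48   # optional pin; omit this line to build the latest code
make                   # produces the ./main binary used below
```
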
## Provided files
| Name | Quant method | Bits | Size | RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| `OpenAssistant-30B-epoch7.ggmlv3.q4_0.bin` | q4_0 | 4bit | 20.3GB | 23GB | 4-bit. |
| `OpenAssistant-30B-epoch7.ggmlv3.q4_1.bin` | q4_1 | 4bit | 22.4GB | 25GB | 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However, it has quicker inference than the q5 models. |
| `OpenAssistant-30B-epoch7.ggmlv3.q5_0.bin` | q5_0 | 5bit | 22.4GB | 25GB | 5-bit. Higher accuracy, higher resource usage and slower inference. |
| `OpenAssistant-30B-epoch7.ggmlv3.q5_1.bin` | q5_1 | 5bit | 24.4GB | 27GB | 5-bit. Even higher accuracy, higher resource usage and slower inference. |
| `OpenAssistant-30B-epoch7.ggmlv3.q8_0.bin` | q8_0 | 8bit | 36.6GB | 39GB | 8-bit. Almost indistinguishable from float16. Huge resource use and slow. Not recommended for normal use. |
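
To grab a single file without cloning the whole repository, you can download it directly from Hugging Face. A sketch, with `<user>/<repo>` as a placeholder for this repository's actual ID:

```
# <user>/<repo> is a placeholder - substitute this repo's real Hugging Face ID
wget https://huggingface.co/<user>/<repo>/resolve/main/OpenAssistant-30B-epoch7.ggmlv3.q4_0.bin
# for the older GGMLv2 files, replace `main` with the branch name `previous_llama_ggmlv2`
```
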
## How to run in `llama.cpp`

The previous files, which will still work in older versions of llama.cpp, can be found in branch `previous_llama_ggmlv2`.

I use the following command line; adjust for your tastes and needs:

```
./main -t 18 -m OpenAssistant-30B-epoch7.ggmlv3.q4_0.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<|prompter|>Write a story about llamas <|assistant|>:"
```

Change `-t 18` to the number of physical CPU cores you have. For example, if your system has 8 cores/16 threads, use `-t 8`.
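
If you are not sure how many physical cores you have, one quick way to check on Linux:

```
# physical cores = "Core(s) per socket" x "Socket(s)"
lscpu | grep -E '^(Core\(s\) per socket|Socket\(s\))'
nproc   # logical CPUs (threads); with SMT this is usually 2x the physical core count
```
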

```
llama-30b-sft-7:
  ...
  max_val_set: 250
```

- **OASST dataset paper:** https://arxiv.org/abs/2304.07327