Refer to the Provided Files table below to see what files use which methods, and how.

| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [llama-2-70b-chat.Q4_K_M.gguf](https://huggingface.co/TheBloke/Llama-2-70B-chat-GGUF/blob/main/llama-2-70b-chat.Q4_K_M.gguf) | Q4_K_M | 4 | 41.42 GB | 43.92 GB | medium, balanced quality - recommended |
| [llama-2-70b-chat.Q5_0.gguf](https://huggingface.co/TheBloke/Llama-2-70B-chat-GGUF/blob/main/llama-2-70b-chat.Q5_0.gguf) | Q5_0 | 5 | 47.46 GB | 49.96 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [llama-2-70b-chat.Q5_K_M.gguf](https://huggingface.co/TheBloke/Llama-2-70B-chat-GGUF/blob/main/llama-2-70b-chat.Q5_K_M.gguf) | Q5_K_M | 5 | 48.75 GB | 51.25 GB | large, very low quality loss - recommended |
| llama-2-70b-chat.Q6_K.gguf | Q6_K | 6 | 56.59 GB | 59.09 GB | very large, extremely low quality loss |
| llama-2-70b-chat.Q8_0.gguf | Q8_0 | 8 | 73.29 GB | 75.79 GB | very large, extremely low quality loss - not recommended |

**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
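
For example, with llama.cpp you can offload some or all layers to the GPU using the `-ngl` flag. A minimal sketch only, assuming a GPU-enabled llama.cpp build; the binary name (`main`, later `llama-cli`) and suitable layer counts vary by version and available VRAM:
```
# Illustrative only: offload 40 layers to the GPU; the remaining layers stay in system RAM
./main -m llama-2-70b-chat.Q4_K_M.gguf -ngl 40 -c 4096 -p "Hello"
```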

### Q6_K and Q8_0 files are split and require joining

**Note:** HF does not support uploading files larger than 50GB. Therefore I have uploaded the Q6_K and Q8_0 files as split files.

<details>
<summary>Click for instructions regarding Q6_K and Q8_0 files</summary>

### q6_K
Please download:
* `llama-2-70b-chat.Q6_K.gguf-split-a`
* `llama-2-70b-chat.Q6_K.gguf-split-b`

### q8_0
Please download:
* `llama-2-70b-chat.Q8_0.gguf-split-a`
* `llama-2-70b-chat.Q8_0.gguf-split-b`
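
As an optional alternative to downloading through the browser, the split parts can be fetched with `huggingface-cli`. A sketch assuming the `huggingface_hub` Python package is installed; exact flags may differ between versions:
```
# Example for the Q6_K parts; repeat with the Q8_0 filenames as needed
pip3 install huggingface_hub
huggingface-cli download TheBloke/Llama-2-70B-chat-GGUF llama-2-70b-chat.Q6_K.gguf-split-a llama-2-70b-chat.Q6_K.gguf-split-b --local-dir . --local-dir-use-symlinks False
```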

To join the files, do the following:

Linux and macOS:
```
cat llama-2-70b-chat.Q6_K.gguf-split-* > llama-2-70b-chat.Q6_K.gguf && rm llama-2-70b-chat.Q6_K.gguf-split-*
cat llama-2-70b-chat.Q8_0.gguf-split-* > llama-2-70b-chat.Q8_0.gguf && rm llama-2-70b-chat.Q8_0.gguf-split-*
```
Windows command line:
```
COPY /B llama-2-70b-chat.Q6_K.gguf-split-a + llama-2-70b-chat.Q6_K.gguf-split-b llama-2-70b-chat.Q6_K.gguf
del llama-2-70b-chat.Q6_K.gguf-split-a llama-2-70b-chat.Q6_K.gguf-split-b

COPY /B llama-2-70b-chat.Q8_0.gguf-split-a + llama-2-70b-chat.Q8_0.gguf-split-b llama-2-70b-chat.Q8_0.gguf
del llama-2-70b-chat.Q8_0.gguf-split-a llama-2-70b-chat.Q8_0.gguf-split-b
```
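
After joining, an optional sanity check on Linux/macOS is to confirm the joined file size roughly matches the Size column in the table above before using the model:
```
# Expect roughly 56.59 GB for Q6_K and 73.29 GB for Q8_0 (per the table above)
ls -lh llama-2-70b-chat.Q6_K.gguf llama-2-70b-chat.Q8_0.gguf
```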

</details>
<!-- README_GGUF.md-provided-files end -->

<!-- README_GGUF.md-how-to-run start -->