TheBloke committed on
Commit 776ccd8
1 Parent(s): 7dd6759

Upload README.md

Files changed (1):
  1. README.md +46 -52
README.md CHANGED
@@ -3,7 +3,7 @@ inference: false
  language:
  - en
  library_name: transformers
- license: other
+ license: llama2
  model_creator: augtoma
  model_link: https://huggingface.co/augtoma/qCammel-70-x
  model_name: qCammel 70
@@ -18,17 +18,20 @@ tags:
  ---
 
  <!-- header start -->
- <div style="width: 100%;">
- <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
+ <!-- 200823 -->
+ <div style="width: auto; margin-left: auto; margin-right: auto">
+ <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
  </div>
  <div style="display: flex; justify-content: space-between; width: 100%;">
  <div style="display: flex; flex-direction: column; align-items: flex-start;">
- <p><a href="https://discord.gg/theblokeai">Chat & support: my new Discord server</a></p>
+ <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
  </div>
  <div style="display: flex; flex-direction: column; align-items: flex-end;">
- <p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
+ <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
  </div>
  </div>
+ <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
+ <hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
  <!-- header end -->
 
  # qCammel 70 - GGML
@@ -39,7 +42,15 @@ tags:
 
  This repo contains GGML format model files for [augtoma's qCammel 70](https://huggingface.co/augtoma/qCammel-70-x).
 
- GPU acceleration is now available for Llama 2 70B GGML files, with both CUDA (NVidia) and Metal (macOS). The following clients/libraries are known to work with these files, including with CUDA GPU acceleration:
+ ### Important note regarding GGML files
+
+ The GGML format has now been superseded by GGUF. As of August 21st 2023, [llama.cpp](https://github.com/ggerganov/llama.cpp) no longer supports GGML models. Third-party clients and libraries are expected to still support it for a time, but many may also drop support.
+
+ Please use the GGUF models instead; existing GGML files can be converted, as sketched below.
+
+ ### About GGML
+
+ GPU acceleration is now available for Llama 2 70B GGML files, with both CUDA (NVidia) and Metal (macOS). The following clients/libraries are known to work with these files, including with GPU acceleration:
  * [llama.cpp](https://github.com/ggerganov/llama.cpp), commit `e76d630` and later.
  * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI.
  * [KoboldCpp](https://github.com/LostRuins/koboldcpp), version 1.37 and later. A powerful GGML web UI, especially good for storytelling.
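
If you already have these GGML files and want to move to GGUF, llama.cpp checkouts of this period shipped a conversion script. The following is a minimal sketch, assuming such a checkout; the script's flag names varied between versions, so verify with `--help` before relying on them:

```
# Sketch: convert a GGMLv3 llama file to GGUF with the script bundled with
# llama.cpp at the time. Flag names are assumptions; check --help first.
# --gqa 8 matches the -gqa 8 inference argument required for Llama 2 70B.
python3 convert-llama-ggmlv3-to-gguf.py \
  --input qcammel-70-x.ggmlv3.q4_K_M.bin \
  --output qcammel-70-x.q4_K_M.gguf \
  --gqa 8
```
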
@@ -50,22 +61,25 @@ GPU acceleration is now available for Llama 2 70B GGML files, with both CUDA (NV
  ## Repositories available
 
  * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/qCammel-70-x-GPTQ)
- * [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/qCammel-70-x-GGML)
+ * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/qCammel-70-x-GGUF)
+ * [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference (deprecated)](https://huggingface.co/TheBloke/qCammel-70-x-GGML)
  * [augtoma's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/augtoma/qCammel-70-x)
 
  ## Prompt template: Vicuna
 
  ```
- A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions.
+ A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: {prompt} ASSISTANT:
 
- USER: {prompt}
- ASSISTANT:
  ```
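
To use the template, substitute your request for `{prompt}` and pass the resulting string to your client. A minimal shell sketch (the variable names are illustrative):

```
# Sketch: fill the Vicuna template; the system text is fixed,
# {prompt} is replaced by your request
PROMPT="Write a story about llamas"
FULL_PROMPT="A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: ${PROMPT} ASSISTANT:"
echo "$FULL_PROMPT"
```
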
 
  <!-- compatibility_ggml start -->
  ## Compatibility
 
- ### Requires llama.cpp [commit `e76d630`](https://github.com/ggerganov/llama.cpp/commit/e76d630df17e235e6b9ef416c45996765d2e36fb) or later.
+ ### Works with llama.cpp [commit `e76d630`](https://github.com/ggerganov/llama.cpp/commit/e76d630df17e235e6b9ef416c45996765d2e36fb) until August 21st, 2023
+
+ Will not work with `llama.cpp` after commit [dadbed99e65252d79f81101a392d0d6497b86caa](https://github.com/ggerganov/llama.cpp/commit/dadbed99e65252d79f81101a392d0d6497b86caa).
+
+ For compatibility with latest llama.cpp, please use GGUF files instead.
 
  Or one of the other tools and libraries listed above.
 
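To build a GGML-compatible `llama.cpp` yourself, something along these lines should work. This is a sketch, assuming a typical Linux toolchain; `LLAMA_CUBLAS` and `LLAMA_METAL` were the GPU build flags llama.cpp used in this period:

```
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
# check out the last GGML-compatible commit referenced above
git checkout dadbed99e65252d79f81101a392d0d6497b86caa
# CUDA build for NVidia GPUs; on macOS use: make LLAMA_METAL=1
make LLAMA_CUBLAS=1
```
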
@@ -94,70 +108,48 @@ Refer to the Provided Files table below to see what files use which methods, and
  | Name | Quant method | Bits | Size | Max RAM required | Use case |
  | ---- | ---- | ---- | ---- | ---- | ----- |
  | [qcammel-70-x.ggmlv3.q2_K.bin](https://huggingface.co/TheBloke/qCammel-70-x-GGML/blob/main/qcammel-70-x.ggmlv3.q2_K.bin) | q2_K | 2 | 28.59 GB| 31.09 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv and feed_forward.w2 tensors, GGML_TYPE_Q2_K for the other tensors. |
- | [qcammel-70-x.ggmlv3.q3_K_L.bin](https://huggingface.co/TheBloke/qCammel-70-x-GGML/blob/main/qcammel-70-x.ggmlv3.q3_K_L.bin) | q3_K_L | 3 | 36.15 GB| 38.65 GB | New k-quant method. Uses GGML_TYPE_Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
- | [qcammel-70-x.ggmlv3.q3_K_M.bin](https://huggingface.co/TheBloke/qCammel-70-x-GGML/blob/main/qcammel-70-x.ggmlv3.q3_K_M.bin) | q3_K_M | 3 | 33.04 GB| 35.54 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
  | [qcammel-70-x.ggmlv3.q3_K_S.bin](https://huggingface.co/TheBloke/qCammel-70-x-GGML/blob/main/qcammel-70-x.ggmlv3.q3_K_S.bin) | q3_K_S | 3 | 29.75 GB| 32.25 GB | New k-quant method. Uses GGML_TYPE_Q3_K for all tensors |
+ | [qcammel-70-x.ggmlv3.q3_K_M.bin](https://huggingface.co/TheBloke/qCammel-70-x-GGML/blob/main/qcammel-70-x.ggmlv3.q3_K_M.bin) | q3_K_M | 3 | 33.04 GB| 35.54 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
+ | [qcammel-70-x.ggmlv3.q3_K_L.bin](https://huggingface.co/TheBloke/qCammel-70-x-GGML/blob/main/qcammel-70-x.ggmlv3.q3_K_L.bin) | q3_K_L | 3 | 36.15 GB| 38.65 GB | New k-quant method. Uses GGML_TYPE_Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
  | [qcammel-70-x.ggmlv3.q4_0.bin](https://huggingface.co/TheBloke/qCammel-70-x-GGML/blob/main/qcammel-70-x.ggmlv3.q4_0.bin) | q4_0 | 4 | 38.87 GB| 41.37 GB | Original quant method, 4-bit. |
- | [qcammel-70-x.ggmlv3.q4_1.bin](https://huggingface.co/TheBloke/qCammel-70-x-GGML/blob/main/qcammel-70-x.ggmlv3.q4_1.bin) | q4_1 | 4 | 43.17 GB| 45.67 GB | Original quant method, 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However has quicker inference than q5 models. |
- | [qcammel-70-x.ggmlv3.q4_K_M.bin](https://huggingface.co/TheBloke/qCammel-70-x-GGML/blob/main/qcammel-70-x.ggmlv3.q4_K_M.bin) | q4_K_M | 4 | 41.38 GB| 43.88 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q4_K |
  | [qcammel-70-x.ggmlv3.q4_K_S.bin](https://huggingface.co/TheBloke/qCammel-70-x-GGML/blob/main/qcammel-70-x.ggmlv3.q4_K_S.bin) | q4_K_S | 4 | 38.87 GB| 41.37 GB | New k-quant method. Uses GGML_TYPE_Q4_K for all tensors |
+ | [qcammel-70-x.ggmlv3.q4_K_M.bin](https://huggingface.co/TheBloke/qCammel-70-x-GGML/blob/main/qcammel-70-x.ggmlv3.q4_K_M.bin) | q4_K_M | 4 | 41.38 GB| 43.88 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q4_K |
+ | [qcammel-70-x.ggmlv3.q4_1.bin](https://huggingface.co/TheBloke/qCammel-70-x-GGML/blob/main/qcammel-70-x.ggmlv3.q4_1.bin) | q4_1 | 4 | 43.17 GB| 45.67 GB | Original quant method, 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However has quicker inference than q5 models. |
  | [qcammel-70-x.ggmlv3.q5_0.bin](https://huggingface.co/TheBloke/qCammel-70-x-GGML/blob/main/qcammel-70-x.ggmlv3.q5_0.bin) | q5_0 | 5 | 47.46 GB| 49.96 GB | Original quant method, 5-bit. Higher accuracy, higher resource usage and slower inference. |
- | [qcammel-70-x.ggmlv3.q5_K_M.bin](https://huggingface.co/TheBloke/qCammel-70-x-GGML/blob/main/qcammel-70-x.ggmlv3.q5_K_M.bin) | q5_K_M | 5 | 48.75 GB| 51.25 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q5_K |
  | [qcammel-70-x.ggmlv3.q5_K_S.bin](https://huggingface.co/TheBloke/qCammel-70-x-GGML/blob/main/qcammel-70-x.ggmlv3.q5_K_S.bin) | q5_K_S | 5 | 47.46 GB| 49.96 GB | New k-quant method. Uses GGML_TYPE_Q5_K for all tensors |
- | qcammel-70-x.ggmlv3.q5_1.bin | q5_1 | 5 | 51.76 GB | 54.26 GB | Original quant method, 5-bit. Higher accuracy, slower inference than q5_0. |
- | qcammel-70-x.ggmlv3.q6_K.bin | q6_K | 6 | 56.59 GB | 59.09 GB | New k-quant method. Uses GGML_TYPE_Q8_K - 6-bit quantization - for all tensors |
- | qcammel-70-x.ggmlv3.q8_0.bin | q8_0 | 8 | 73.23 GB | 75.73 GB | Original llama.cpp quant method, 8-bit. Almost indistinguishable from float16. High resource use and slow. Not recommended for most users. |
+ | [qcammel-70-x.ggmlv3.q5_K_M.bin](https://huggingface.co/TheBloke/qCammel-70-x-GGML/blob/main/qcammel-70-x.ggmlv3.q5_K_M.bin) | q5_K_M | 5 | 48.75 GB| 51.25 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q5_K |
 
  **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
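
If you only want one of the files above, it can be fetched directly instead of cloning the whole repo; the `resolve/main` URL pattern below mirrors the `blob/main` links in the table:

```
# Download a single quantised file (q4_K_M shown as an example)
wget https://huggingface.co/TheBloke/qCammel-70-x-GGML/resolve/main/qcammel-70-x.ggmlv3.q4_K_M.bin
```
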
 
- ### q5_1, q6_K and q8_0 files require expansion from archive
 
- **Note:** HF does not support uploading files larger than 50GB. Therefore I have uploaded the q6_K and q8_0 files as multi-part ZIP files. They are not compressed, they are just for storing a .bin file in two parts.
 
- <details>
- <summary>Click for instructions regarding q5_1, q6_K and q8_0 files</summary>
-
- ### q5_1
- Please download:
- * `qcammel-70-x.ggmlv3.q5_1.zip`
- * `qcammel-70-x.ggmlv3.q5_1.z01`
-
- ### q6_K
- Please download:
- * `qcammel-70-x.ggmlv3.q6_K.zip`
- * `qcammel-70-x.ggmlv3.q6_K.z01`
-
- ### q8_0
- Please download:
- * `qcammel-70-x.ggmlv3.q8_0.zip`
- * `qcammel-70-x.ggmlv3.q8_0.z01`
-
- Then extract the .zip archive. This will expand both parts automatically. On Linux I found I had to use `7zip` - the basic `unzip` tool did not work. Example:
- ```
- sudo apt update -y && sudo apt install 7zip
- 7zz x qcammel-70-x.ggmlv3.q6_K.zip
- ```
- </details>
-
- ## How to run in `llama.cpp`
+ ## How to run in `llama.cpp`
+
+ Make sure you are using `llama.cpp` from commit [dadbed99e65252d79f81101a392d0d6497b86caa](https://github.com/ggerganov/llama.cpp/commit/dadbed99e65252d79f81101a392d0d6497b86caa) or earlier.
+
+ For compatibility with latest llama.cpp, please use GGUF files instead.
 
  I use the following command line; adjust for your tastes and needs:
 
  ```
- ./main -t 10 -ngl 40 -gqa 8 -m qcammel-70-x.ggmlv3.q4_K_M.bin --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions.\n\nUSER: Write a story about llamas\nASSISTANT:"
+ ./main -t 10 -ngl 40 -gqa 8 -m qcammel-70-x.ggmlv3.q4_K_M.bin --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: Write a story about llamas ASSISTANT:"
  ```
- Change `-t 10` to the number of physical CPU cores you have. For example if your system has 8 cores/16 threads, use `-t 8`.
-
- Change -ngl 40 to the number of GPU layers you have VRAM for. Use -ngl 100 to offload all layers to VRAM, if you have a 48GB card, or 2 x 24GB, or similar. Otherwise you can partially offload as many as you have VRAM for, on one or more GPUs.
+ Change `-t 10` to the number of physical CPU cores you have. For example, if your system has 8 cores/16 threads, use `-t 8`. If you are fully offloading the model to GPU, use `-t 1`.
+
+ Change `-ngl 40` to the number of GPU layers you have VRAM for. Use `-ngl 100` to offload all layers to VRAM - if you have a 48GB card, or 2 x 24GB, or similar. Otherwise you can partially offload as many as you have VRAM for, on one or more GPUs.
+
+ If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`.
 
  Remember the `-gqa 8` argument, required for Llama 70B models.
 
- If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
+ Change `-c 4096` to the desired sequence length for this model. For models that use RoPE, add `--rope-freq-base 10000 --rope-freq-scale 0.5` for doubled context, or `--rope-freq-base 10000 --rope-freq-scale 0.25` for 4x context.
+
+ For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md).
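
As an example of the chat-style option mentioned above, the same command with the prompt argument swapped for interactive instruction mode would look like this:

```
# -i -ins replaces -p "<PROMPT>" for an interactive, chat-style session
./main -t 10 -ngl 40 -gqa 8 -m qcammel-70-x.ggmlv3.q4_K_M.bin --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -i -ins
```
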
 
  ## How to run in `text-generation-webui`
 
  Further instructions here: [text-generation-webui/docs/llama.cpp-models.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp-models.md).
 
  <!-- footer start -->
+ <!-- 200823 -->
  ## Discord
 
  For further support, and discussions on these models and AI in general, join us at:
@@ -177,13 +169,15 @@ Donaters will get priority support on any and all AI/LLM/model questions and req
  * Patreon: https://patreon.com/TheBlokeAI
  * Ko-Fi: https://ko-fi.com/TheBlokeAI
 
- **Special thanks to**: Luke from CarbonQuill, Aemon Algiz.
+ **Special thanks to**: Aemon Algiz.
 
- **Patreon special mentions**: Willem Michiel, Ajan Kanaga, Cory Kujawski, Alps Aficionado, Nikolai Manek, Jonathan Leane, Stanislav Ovsiannikov, Michael Levine, Luke Pendergrass, Sid, K, Gabriel Tamborski, Clay Pascal, Kalila, William Sang, Will Dee, Pieter, Nathan LeClaire, ya boyyy, David Flickinger, vamX, Derek Yates, Fen Risland, Jeffrey Morgan, webtim, Daniel P. Andersen, Chadd, Edmond Seymore, Pyrater, Olusegun Samson, Lone Striker, biorpg, alfie_i, Mano Prime, Chris Smitley, Dave, zynix, Trenton Dambrowitz, Johann-Peter Hartmann, Magnesian, Spencer Kim, John Detwiler, Iucharbius, Gabriel Puliatti, LangChain4j, Luke @flexchar, Vadim, Rishabh Srivastava, Preetika Verma, Ai Maven, Femi Adebogun, WelcomeToTheClub, Leonard Tan, Imad Khwaja, Steven Wood, Stefan Sabev, Sebastain Graf, usrbinkat, Dan Guido, Sam, Eugene Pentland, Mandus, transmissions 11, Slarti, Karl Bernard, Spiking Neurons AB, Artur Olbinski, Joseph William Delisle, ReadyPlayerEmma, Olakabola, Asp the Wyvern, Space Cruiser, Matthew Berman, Randy H, subjectnull, danny, John Villwock, Illia Dulskyi, Rainer Wilmers, theTransient, Pierre Kircher, Alexandros Triantafyllidis, Viktor Bowallius, terasurfer, Deep Realms, SuperWojo, senxiiz, Oscar Rangel, Alex, Stephen Murray, Talal Aujan, Raven Klaugh, Sean Connelly, Raymond Fosdick, Fred von Graf, chris gileta, Junyu Yang, Elle
+ **Patreon special mentions**: Russ Johnson, J, alfie_i, Alex, NimbleBox.ai, Chadd, Mandus, Nikolai Manek, Ken Nordquist, ya boyyy, Illia Dulskyi, Viktor Bowallius, vamX, Iucharbius, zynix, Magnesian, Clay Pascal, Pierre Kircher, Enrico Ros, Tony Hughes, Elle, Andrey, knownsqashed, Deep Realms, Jerry Meng, Lone Striker, Derek Yates, Pyrater, Mesiah Bishop, James Bentley, Femi Adebogun, Brandon Frisco, SuperWojo, Alps Aficionado, Michael Dempsey, Vitor Caleffi, Will Dee, Edmond Seymore, usrbinkat, LangChain4j, Kacper Wikieł, Luke Pendergrass, John Detwiler, theTransient, Nathan LeClaire, Tiffany J. Kim, biorpg, Eugene Pentland, Stanislav Ovsiannikov, Fred von Graf, terasurfer, Kalila, Dan Guido, Nitin Borwankar, 阿明, Ai Maven, John Villwock, Gabriel Puliatti, Stephen Murray, Asp the Wyvern, danny, Chris Smitley, ReadyPlayerEmma, S_X, Daniel P. Andersen, Olakabola, Jeffrey Morgan, Imad Khwaja, Caitlyn Gatomon, webtim, Alicia Loh, Trenton Dambrowitz, Swaroop Kallakuri, Erik Bjäreholt, Leonard Tan, Spiking Neurons AB, Luke @flexchar, Ajan Kanaga, Thomas Belote, Deo Leter, RoA, Willem Michiel, transmissions 11, subjectnull, Matthew Berman, Joseph William Delisle, David Ziegler, Michael Davis, Johann-Peter Hartmann, Talal Aujan, senxiiz, Artur Olbinski, Rainer Wilmers, Spencer Kim, Fen Risland, Cap'n Zoog, Rishabh Srivastava, Michael Levine, Geoffrey Montalvo, Sean Connelly, Alexandros Triantafyllidis, Pieter, Gabriel Tamborski, Sam, Subspace Studios, Junyu Yang, Pedro Madruga, Vadim, Cory Kujawski, K, Raven Klaugh, Randy H, Mano Prime, Sebastain Graf, Space Cruiser
 
 
  Thank you to all my generous patrons and donaters!
 
+ And thank you again to a16z for their generous grant.
+
  <!-- footer end -->
 
  # Original model card: augtoma's qCammel 70