TheBloke committed
Commit 861096f
1 Parent(s): 376c4f0

Upload README.md

Files changed (1): README.md (+55 -16)
README.md CHANGED
@@ -42,7 +42,7 @@ GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is

The key benefit of GGUF is that it is an extensible, future-proof format which stores more information about the model as metadata. It also includes significantly improved tokenization code, including for the first time full support for special tokens. This should improve performance, especially with models that use new special tokens and implement custom prompt templates.

- As of August 25th, here is a list of clients and libraries that are known to support GGUF:
+ Here is a list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp).
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI. Supports GGUF with GPU acceleration via the ctransformers backend - llama-cpp-python backend should work soon too.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), now supports GGUF as of release 1.41! A powerful GGML web UI, with full GPU accel. Especially good for storytelling.
@@ -52,9 +52,7 @@ As of August 25th, here is a list of clients and libraries that are known to sup
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), supports GGUF as of version 0.1.79. A Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), added GGUF support on August 22nd. Candle is a Rust ML framework with a focus on performance, including GPU support, and ease of use.

- The clients and libraries below are expected to add GGUF support shortly:
<!-- README_GGUF.md-about-gguf end -->
-
<!-- repositories-available start -->
## Repositories available

@@ -79,9 +77,7 @@ ASSISTANT:

These quantised GGUF files are compatible with llama.cpp from August 21st 2023 onwards, as of commit [6381d4e110bd0ec02843a60bbeb8b6fc37a9ace9](https://github.com/ggerganov/llama.cpp/commit/6381d4e110bd0ec02843a60bbeb8b6fc37a9ace9)

- As of August 24th 2023 they are now compatible with KoboldCpp, release 1.41 and later.
-
- They are not yet compatible with any other third-party UIs, libraries or utilities, but this is expected to change very soon.
+ They are now also compatible with many third-party UIs and libraries - please see the list at the top of the README.

## Explanation of quantisation methods
<details>
@@ -103,31 +99,36 @@ Refer to the Provided Files table below to see what files use which methods, and

| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
+ | [airoboros-c34b-2.1.Q6_K.gguf](https://huggingface.co/TheBloke/Airoboros-c34B-2.1-GGUF/blob/main/airoboros-c34b-2.1.Q6_K.gguf) | Q6_K | 6 | 0.00 GB| 2.50 GB | very large, extremely low quality loss |
+ | [airoboros-c34b-2.1.Q8_0.gguf](https://huggingface.co/TheBloke/Airoboros-c34B-2.1-GGUF/blob/main/airoboros-c34b-2.1.Q8_0.gguf) | Q8_0 | 8 | 0.00 GB| 2.50 GB | very large, extremely low quality loss - not recommended |
+ | [airoboros-c34b-2.1.Q5_K_M.gguf](https://huggingface.co/TheBloke/Airoboros-c34B-2.1-GGUF/blob/main/airoboros-c34b-2.1.Q5_K_M.gguf) | Q5_K_M | 5 | 13.02 GB| 15.52 GB | large, very low quality loss - recommended |
| [airoboros-c34b-2.1.Q2_K.gguf](https://huggingface.co/TheBloke/Airoboros-c34B-2.1-GGUF/blob/main/airoboros-c34b-2.1.Q2_K.gguf) | Q2_K | 2 | 14.21 GB| 16.71 GB | smallest, significant quality loss - not recommended for most purposes |
| [airoboros-c34b-2.1.Q3_K_S.gguf](https://huggingface.co/TheBloke/Airoboros-c34B-2.1-GGUF/blob/main/airoboros-c34b-2.1.Q3_K_S.gguf) | Q3_K_S | 3 | 14.61 GB| 17.11 GB | very small, high quality loss |
| [airoboros-c34b-2.1.Q3_K_M.gguf](https://huggingface.co/TheBloke/Airoboros-c34B-2.1-GGUF/blob/main/airoboros-c34b-2.1.Q3_K_M.gguf) | Q3_K_M | 3 | 16.28 GB| 18.78 GB | very small, high quality loss |
| [airoboros-c34b-2.1.Q3_K_L.gguf](https://huggingface.co/TheBloke/Airoboros-c34B-2.1-GGUF/blob/main/airoboros-c34b-2.1.Q3_K_L.gguf) | Q3_K_L | 3 | 17.77 GB| 20.27 GB | small, substantial quality loss |
+ | [airoboros-c34b-2.1.Q4_0.gguf](https://huggingface.co/TheBloke/Airoboros-c34B-2.1-GGUF/blob/main/airoboros-c34b-2.1.Q4_0.gguf) | Q4_0 | 4 | 19.05 GB| 21.55 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [airoboros-c34b-2.1.Q4_K_S.gguf](https://huggingface.co/TheBloke/Airoboros-c34B-2.1-GGUF/blob/main/airoboros-c34b-2.1.Q4_K_S.gguf) | Q4_K_S | 4 | 19.15 GB| 21.65 GB | small, greater quality loss |
| [airoboros-c34b-2.1.Q4_K_M.gguf](https://huggingface.co/TheBloke/Airoboros-c34B-2.1-GGUF/blob/main/airoboros-c34b-2.1.Q4_K_M.gguf) | Q4_K_M | 4 | 20.22 GB| 22.72 GB | medium, balanced quality - recommended |
+ | [airoboros-c34b-2.1.Q5_0.gguf](https://huggingface.co/TheBloke/Airoboros-c34B-2.1-GGUF/blob/main/airoboros-c34b-2.1.Q5_0.gguf) | Q5_0 | 5 | 23.24 GB| 25.74 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [airoboros-c34b-2.1.Q5_K_S.gguf](https://huggingface.co/TheBloke/Airoboros-c34B-2.1-GGUF/blob/main/airoboros-c34b-2.1.Q5_K_S.gguf) | Q5_K_S | 5 | 23.24 GB| 25.74 GB | large, low quality loss - recommended |
- | [airoboros-c34b-2.1.Q5_K_M.gguf](https://huggingface.co/TheBloke/Airoboros-c34B-2.1-GGUF/blob/main/airoboros-c34b-2.1.Q5_K_M.gguf) | Q5_K_M | 5 | 23.84 GB| 26.34 GB | large, very low quality loss - recommended |
- | [airoboros-c34b-2.1.Q6_K.gguf](https://huggingface.co/TheBloke/Airoboros-c34B-2.1-GGUF/blob/main/airoboros-c34b-2.1.Q6_K.gguf) | Q6_K | 6 | 27.68 GB| 30.18 GB | very large, extremely low quality loss |
- | [airoboros-c34b-2.1.Q8_0.gguf](https://huggingface.co/TheBloke/Airoboros-c34B-2.1-GGUF/blob/main/airoboros-c34b-2.1.Q8_0.gguf) | Q8_0 | 8 | 35.86 GB| 38.36 GB | very large, extremely low quality loss - not recommended |

**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.

<!-- README_GGUF.md-provided-files end -->

<!-- README_GGUF.md-how-to-run start -->
- ## How to run in `llama.cpp`
+ ## Example `llama.cpp` command

Make sure you are using `llama.cpp` from commit [6381d4e110bd0ec02843a60bbeb8b6fc37a9ace9](https://github.com/ggerganov/llama.cpp/commit/6381d4e110bd0ec02843a60bbeb8b6fc37a9ace9) or later.

- For compatibility with older versions of llama.cpp, or for use with third-party clients and libraries, please use GGML files instead.
+ For compatibility with older versions of llama.cpp, or for any third-party libraries or clients that haven't yet updated for GGUF, please use GGML files instead.

```
./main -t 10 -ngl 32 -m airoboros-c34b-2.1.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "A chat.\nUSER: Write a story about llamas\nASSISTANT:"
```
- Change `-t 10` to the number of physical CPU cores you have. For example if your system has 8 cores/16 threads, use `-t 8`.
+ Change `-t 10` to the number of physical CPU cores you have. For example if your system has 8 cores/16 threads, use `-t 8`. If offloading all layers to GPU, set `-t 1`.

Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.

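The same invocation can be reproduced from Python with the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) library listed earlier. This is a minimal sketch, assuming llama-cpp-python 0.1.79 or later; the parameter values simply mirror the CLI flags above, and the `max_tokens` cap is illustrative (the CLI's `-n -1` means unlimited).

```python
from llama_cpp import Llama

# Mirror the ./main flags: -m -> model_path, -c 4096 -> n_ctx,
# -ngl 32 -> n_gpu_layers, -t 10 -> n_threads.
llm = Llama(
    model_path="airoboros-c34b-2.1.Q4_K_M.gguf",
    n_ctx=4096,
    n_gpu_layers=32,  # set to 0 if you have no GPU acceleration
    n_threads=10,     # number of physical CPU cores
)

output = llm(
    "A chat.\nUSER: Write a story about llamas\nASSISTANT:",
    max_tokens=512,      # illustrative cap standing in for -n -1
    temperature=0.7,     # --temp 0.7
    repeat_penalty=1.1,  # --repeat_penalty 1.1
)
print(output["choices"][0]["text"])
```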
@@ -140,6 +141,44 @@ For other parameters and how to use them, please refer to [the llama.cpp documen
## How to run in `text-generation-webui`

Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
+
+ ## How to run from Python code
+
+ You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
+
+ ### How to load this model from Python using ctransformers
+
+ #### First install the package
+
+ ```bash
+ # Base ctransformers with no GPU acceleration
+ pip install 'ctransformers>=0.2.24'
+ # Or with CUDA GPU acceleration
+ pip install 'ctransformers[cuda]>=0.2.24'
+ # Or with ROCm GPU acceleration
+ CT_HIPBLAS=1 pip install 'ctransformers>=0.2.24' --no-binary ctransformers
+ # Or with Metal GPU acceleration for macOS systems
+ CT_METAL=1 pip install 'ctransformers>=0.2.24' --no-binary ctransformers
+ ```
+
+ #### Simple example code to load one of these GGUF models
+
+ ```python
+ from ctransformers import AutoModelForCausalLM
+
+ # Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
+ llm = AutoModelForCausalLM.from_pretrained("TheBloke/Airoboros-c34B-2.1-GGUF", model_file="airoboros-c34b-2.1.Q4_K_M.gguf", model_type="llama", gpu_layers=50)
+
+ print(llm("AI is going to"))
+ ```
+
+ ## How to use with LangChain
+
+ Here are guides on using llama-cpp-python or ctransformers with LangChain:
+
+ * [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
+ * [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
+
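To make the ctransformers route concrete, here is a minimal LangChain sketch. It assumes LangChain's `CTransformers` wrapper as documented in the guide linked above; the `config` values are illustrative generation settings passed through to ctransformers, not requirements.

```python
from langchain.llms import CTransformers

# Load the GGUF file via LangChain's ctransformers wrapper.
llm = CTransformers(
    model="TheBloke/Airoboros-c34B-2.1-GGUF",
    model_file="airoboros-c34b-2.1.Q4_K_M.gguf",
    model_type="llama",
    config={"gpu_layers": 50, "temperature": 0.7, "max_new_tokens": 256},
)

print(llm("A chat.\nUSER: Write a story about llamas\nASSISTANT:"))
```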
  <!-- README_GGUF.md-how-to-run end -->
 
  <!-- footer start -->
@@ -165,7 +204,7 @@ Donaters will get priority support on any and all AI/LLM/model questions and req

**Special thanks to**: Aemon Algiz.

- **Patreon special mentions**: Kacper Wikieł, knownsqashed, Leonard Tan, Asp the Wyvern, Daniel P. Andersen, Luke Pendergrass, Stanislav Ovsiannikov, RoA, Dave, Ai Maven, Kalila, Will Dee, Imad Khwaja, Nitin Borwankar, Joseph William Delisle, Tony Hughes, Cory Kujawski, Rishabh Srivastava, Russ Johnson, Stephen Murray, Lone Striker, Johann-Peter Hartmann, Elle, J, Deep Realms, SuperWojo, Raven Klaugh, Sebastain Graf, ReadyPlayerEmma, Alps Aficionado, Mano Prime, Derek Yates, Gabriel Puliatti, Mesiah Bishop, Magnesian, Sean Connelly, biorpg, Iucharbius, Olakabola, Fen Risland, Space Cruiser, theTransient, Illia Dulskyi, Thomas Belote, Spencer Kim, Pieter, John Detwiler, Fred von Graf, Michael Davis, Swaroop Kallakuri, subjectnull, Clay Pascal, Subspace Studios, Chris Smitley, Enrico Ros, usrbinkat, Steven Wood, alfie_i, David Ziegler, Willem Michiel, Matthew Berman, Andrey, Pyrater, Jeffrey Morgan, vamX, LangChain4j, Luke @flexchar, Trenton Dambrowitz, Pierre Kircher, Alex, Sam, James Bentley, Edmond Seymore, Eugene Pentland, Pedro Madruga, Rainer Wilmers, Dan Guido, Nathan LeClaire, Spiking Neurons AB, Talal Aujan, zynix, Artur Olbinski, Michael Levine, 阿明, K, John Villwock, Nikolai Manek, Femi Adebogun, senxiiz, Deo Leter, NimbleBox.ai, Viktor Bowallius, Geoffrey Montalvo, Mandus, Ajan Kanaga, ya boyyy, Jonathan Leane, webtim, Brandon Frisco, danny, Alexandros Triantafyllidis, Gabriel Tamborski, Randy H, terasurfer, Vadim, Junyu Yang, Vitor Caleffi, Chadd, transmissions 11
+ **Patreon special mentions**: Russ Johnson, J, alfie_i, Alex, NimbleBox.ai, Chadd, Mandus, Nikolai Manek, Ken Nordquist, ya boyyy, Illia Dulskyi, Viktor Bowallius, vamX, Iucharbius, zynix, Magnesian, Clay Pascal, Pierre Kircher, Enrico Ros, Tony Hughes, Elle, Andrey, knownsqashed, Deep Realms, Jerry Meng, Lone Striker, Derek Yates, Pyrater, Mesiah Bishop, James Bentley, Femi Adebogun, Brandon Frisco, SuperWojo, Alps Aficionado, Michael Dempsey, Vitor Caleffi, Will Dee, Edmond Seymore, usrbinkat, LangChain4j, Kacper Wikieł, Luke Pendergrass, John Detwiler, theTransient, Nathan LeClaire, Tiffany J. Kim, biorpg, Eugene Pentland, Stanislav Ovsiannikov, Fred von Graf, terasurfer, Kalila, Dan Guido, Nitin Borwankar, 阿明, Ai Maven, John Villwock, Gabriel Puliatti, Stephen Murray, Asp the Wyvern, danny, Chris Smitley, ReadyPlayerEmma, S_X, Daniel P. Andersen, Olakabola, Jeffrey Morgan, Imad Khwaja, Caitlyn Gatomon, webtim, Alicia Loh, Trenton Dambrowitz, Swaroop Kallakuri, Erik Bjäreholt, Leonard Tan, Spiking Neurons AB, Luke @flexchar, Ajan Kanaga, Thomas Belote, Deo Leter, RoA, Willem Michiel, transmissions 11, subjectnull, Matthew Berman, Joseph William Delisle, David Ziegler, Michael Davis, Johann-Peter Hartmann, Talal Aujan, senxiiz, Artur Olbinski, Rainer Wilmers, Spencer Kim, Fen Risland, Cap'n Zoog, Rishabh Srivastava, Michael Levine, Geoffrey Montalvo, Sean Connelly, Alexandros Triantafyllidis, Pieter, Gabriel Tamborski, Sam, Subspace Studios, Junyu Yang, Pedro Madruga, Vadim, Cory Kujawski, K, Raven Klaugh, Randy H, Mano Prime, Sebastain Graf, Space Cruiser


Thank you to all my generous patrons and donaters!
@@ -211,7 +250,7 @@ https://github.com/jondurbin/qlora/blob/main/qlora.py#L559C1-L559C1


```
- A chat. USER: {prompt} ASSISTANT:
+ A chat. USER: {prompt} ASSISTANT:
```

or
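Filling in the template above is ordinary string formatting; here is a tiny sketch, with the newline placement assumed from the `./main` example earlier:

```python
# Assumed line breaks: system line, then USER and ASSISTANT turns on their own lines.
template = "A chat.\nUSER: {prompt}\nASSISTANT:"
print(template.format(prompt="Write a story about llamas"))
```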
@@ -357,7 +396,7 @@ A group of people decide to play a round-robin tournament where every player pla
Example response:
```
Solution 1:
- In a round-robin tournament, each player plays every other player exactly once. This means that the total number of games played is equal to the combination of all players taken two at a time (since a game involves two players).
+ In a round-robin tournament, each player plays every other player exactly once. This means that the total number of games played is equal to the combination of all players taken two at a time (since a game involves two players).

The formula for combinations is nC2 = n(n-1)/2, where n is the total number of players. We know that nC2 equals 45 (the total number of games), so we can set up the equation as follows:

@@ -368,7 +407,7 @@ Solving this equation gives us n=10.
Final answer: There were 10 players in the tournament.

Solution 2:
- Another way to approach this problem is by using the concept of triangular numbers. Triangular numbers are a sequence of numbers where each number can be represented as a triangle with dots. The nth triangular number is given by the sum of the first n natural numbers.
+ Another way to approach this problem is by using the concept of triangular numbers. Triangular numbers are a sequence of numbers where each number can be represented as a triangle with dots. The nth triangular number is given by the sum of the first n natural numbers.

If we consider each game as a dot and each player as a side of the triangle, then the total number of games (45) would correspond to the 9th triangular number because 1+2+3+4+5+6+7+8+9=45. However, since each side of the triangle represents a player, and there's one more player than sides in our model (because the last player has no one left to play against), we need to add one more to get the total number of players.
 