Transformers
GGUF
llama
Not-For-All-Audiences
TheBloke committed
Commit b1179bd
1 Parent(s): 101f946

Upload README.md

Files changed (1)
README.md +65 -2
README.md CHANGED
@@ -1,12 +1,13 @@
 ---
+base_model: https://huggingface.co/jondurbin/spicyboros-13b-2.2
 datasets:
 - jondurbin/airoboros-2.2
 inference: false
 license: llama2
 model_creator: Jon Durbin
-model_link: https://huggingface.co/jondurbin/spicyboros-13b-2.2
 model_name: Spicyboros 13B 2.2
 model_type: llama
+prompt_template: "A chat.\nUSER: {prompt}\nASSISTANT: \n"
 quantized_by: TheBloke
 tags:
 - not-for-all-audiences
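The `prompt_template` value added to the front matter above is a plain string with a `{prompt}` placeholder. As a minimal sketch (not part of this commit) of how a client might expand it, assuming ordinary Python string formatting:

```python
# Prompt template as declared in the model card front matter above.
template = "A chat.\nUSER: {prompt}\nASSISTANT: \n"

def build_prompt(user_message: str) -> str:
    # Substitute the user's message into the {prompt} placeholder.
    return template.format(prompt=user_message)

print(build_prompt("Write a limerick about quantisation."))
# A chat.
# USER: Write a limerick about quantisation.
# ASSISTANT:
```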
@@ -33,10 +34,12 @@ tags:
 - Model creator: [Jon Durbin](https://huggingface.co/jondurbin)
 - Original model: [Spicyboros 13B 2.2](https://huggingface.co/jondurbin/spicyboros-13b-2.2)

+<!-- description start -->
 ## Description

 This repo contains GGUF format model files for [Jon Durbin's Spicyboros 13B 2.2](https://huggingface.co/jondurbin/spicyboros-13b-2.2).

+<!-- description end -->
 <!-- README_GGUF.md-about-gguf start -->
 ### About GGUF

@@ -58,6 +61,7 @@ Here is an incomplete list of clients and libraries that are known to support GG
 <!-- repositories-available start -->
 ## Repositories available

+* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Spicyboros-13B-2.2-AWQ)
 * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Spicyboros-13B-2.2-GPTQ)
 * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Spicyboros-13B-2.2-GGUF)
 * [Jon Durbin's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/jondurbin/spicyboros-13b-2.2)
@@ -74,6 +78,8 @@ ASSISTANT:
 ```

 <!-- prompt-template end -->
+
+
 <!-- compatibility_gguf start -->
 ## Compatibility

@@ -120,6 +126,63 @@ Refer to the Provided Files table below to see what files use which methods, and

 <!-- README_GGUF.md-provided-files end -->

+<!-- README_GGUF.md-how-to-download start -->
+## How to download GGUF files
+
+**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
+
+The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
+- LM Studio
+- LoLLMS Web UI
+- Faraday.dev
+
+### In `text-generation-webui`
+
+Under Download Model, you can enter the model repo: TheBloke/Spicyboros-13B-2.2-GGUF and below it, a specific filename to download, such as: spicyboros-13b-2.2.q4_K_M.gguf.
+
+Then click Download.
+
+### On the command line, including multiple files at once
+
+I recommend using the `huggingface-hub` Python library:
+
+```shell
+pip3 install huggingface-hub>=0.17.1
+```
+
+Then you can download any individual model file to the current directory, at high speed, with a command like this:
+
+```shell
+huggingface-cli download TheBloke/Spicyboros-13B-2.2-GGUF spicyboros-13b-2.2.q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
+```
+
+<details>
+<summary>More advanced huggingface-cli download usage</summary>
+
+You can also download multiple files at once with a pattern:
+
+```shell
+huggingface-cli download TheBloke/Spicyboros-13B-2.2-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
+```
+
+For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
+
+To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
+
+```shell
+pip3 install hf_transfer
+```
+
+And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
+
+```shell
+HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Spicyboros-13B-2.2-GGUF spicyboros-13b-2.2.q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
+```
+
+Windows CLI users: Use `set HF_HUB_ENABLE_HF_TRANSFER=1` before running the download command.
+</details>
+<!-- README_GGUF.md-how-to-download end -->
+
 <!-- README_GGUF.md-how-to-run start -->
 ## Example `llama.cpp` command

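The downloads added in the hunk above can also be scripted with the `huggingface_hub` Python library that the README recommends installing. A minimal sketch under that assumption; the repo id, filename, and pattern are the ones used in the diff's commands, and everything else is illustrative rather than taken from the README:

```python
import os

# Optional acceleration via hf_transfer (requires `pip3 install hf_transfer`).
# The flag is read when huggingface_hub is imported, so set it first.
os.environ["HF_HUB_ENABLE_HF_TRANSFER"] = "1"

from huggingface_hub import hf_hub_download, snapshot_download

# Single file, mirroring the `huggingface-cli download ... spicyboros-13b-2.2.q4_K_M.gguf` command.
path = hf_hub_download(
    repo_id="TheBloke/Spicyboros-13B-2.2-GGUF",
    filename="spicyboros-13b-2.2.q4_K_M.gguf",
    local_dir=".",
)
print(path)

# Several files at once, mirroring the `--include='*Q4_K*gguf'` pattern example.
snapshot_download(
    repo_id="TheBloke/Spicyboros-13B-2.2-GGUF",
    allow_patterns=["*Q4_K*gguf"],
    local_dir=".",
)
```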
@@ -205,7 +268,7 @@ Donaters will get priority support on any and all AI/LLM/model questions and req

 **Special thanks to**: Aemon Algiz.

-**Patreon special mentions**: Russ Johnson, J, alfie_i, Alex, NimbleBox.ai, Chadd, Mandus, Nikolai Manek, Ken Nordquist, ya boyyy, Illia Dulskyi, Viktor Bowallius, vamX, Iucharbius, zynix, Magnesian, Clay Pascal, Pierre Kircher, Enrico Ros, Tony Hughes, Elle, Andrey, knownsqashed, Deep Realms, Jerry Meng, Lone Striker, Derek Yates, Pyrater, Mesiah Bishop, James Bentley, Femi Adebogun, Brandon Frisco, SuperWojo, Alps Aficionado, Michael Dempsey, Vitor Caleffi, Will Dee, Edmond Seymore, usrbinkat, LangChain4j, Kacper Wikieł, Luke Pendergrass, John Detwiler, theTransient, Nathan LeClaire, Tiffany J. Kim, biorpg, Eugene Pentland, Stanislav Ovsiannikov, Fred von Graf, terasurfer, Kalila, Dan Guido, Nitin Borwankar, 阿明, Ai Maven, John Villwock, Gabriel Puliatti, Stephen Murray, Asp the Wyvern, danny, Chris Smitley, ReadyPlayerEmma, S_X, Daniel P. Andersen, Olakabola, Jeffrey Morgan, Imad Khwaja, Caitlyn Gatomon, webtim, Alicia Loh, Trenton Dambrowitz, Swaroop Kallakuri, Erik Bjäreholt, Leonard Tan, Spiking Neurons AB, Luke @flexchar, Ajan Kanaga, Thomas Belote, Deo Leter, RoA, Willem Michiel, transmissions 11, subjectnull, Matthew Berman, Joseph William Delisle, David Ziegler, Michael Davis, Johann-Peter Hartmann, Talal Aujan, senxiiz, Artur Olbinski, Rainer Wilmers, Spencer Kim, Fen Risland, Cap'n Zoog, Rishabh Srivastava, Michael Levine, Geoffrey Montalvo, Sean Connelly, Alexandros Triantafyllidis, Pieter, Gabriel Tamborski, Sam, Subspace Studios, Junyu Yang, Pedro Madruga, Vadim, Cory Kujawski, K, Raven Klaugh, Randy H, Mano Prime, Sebastain Graf, Space Cruiser
+**Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov


 Thank you to all my generous patrons and donaters!
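The body of the "Example `llama.cpp` command" section falls outside the hunks shown in this diff. As a rough, non-authoritative sketch of running one of the downloaded GGUF files from Python with the `llama-cpp-python` bindings, where the package choice, file path, and parameter values are assumptions rather than content of this commit:

```python
from llama_cpp import Llama  # pip3 install llama-cpp-python

# Load a downloaded quant file; n_gpu_layers=0 keeps inference on the CPU.
# Raise it to offload layers to the GPU if llama.cpp was built with GPU support.
llm = Llama(
    model_path="./spicyboros-13b-2.2.q4_K_M.gguf",
    n_ctx=4096,
    n_gpu_layers=0,
)

# Prompt built from the prompt_template declared in the front matter above.
prompt = "A chat.\nUSER: Write a limerick about llamas.\nASSISTANT: \n"

output = llm(prompt, max_tokens=256, stop=["USER:"], echo=False)
print(output["choices"][0]["text"])
```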