Initial GGUF model commit
README.md
CHANGED
@@ -45,16 +45,16 @@ GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is
 The key benefit of GGUF is that it is an extensible, future-proof format which stores more information about the model as metadata. It also includes significantly improved tokenization code, including for the first time full support for special tokens. This should improve performance, especially with models that use new special tokens and implement custom prompt templates.
 
 As of August 25th, here is a list of clients and libraries that are known to support GGUF:
-* [llama.cpp](https://github.com/ggerganov/llama.cpp)
+* [llama.cpp](https://github.com/ggerganov/llama.cpp).
+* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI. Supports GGUF with GPU acceleration via the ctransformers backend - llama-cpp-python backend should work soon too.
 * [KoboldCpp](https://github.com/LostRuins/koboldcpp), now supports GGUF as of release 1.41! A powerful GGML web UI, with full GPU accel. Especially good for story telling.
+* [LM Studio](https://lmstudio.ai/), version 0.2.2 and later support GGUF. A fully featured local GUI with GPU acceleration on both Windows (NVidia and AMD), and macOS.
 * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), should now work, choose the `c_transformers` backend. A great web UI with many interesting features. Supports CUDA GPU acceleration.
 * [ctransformers](https://github.com/marella/ctransformers), now supports GGUF as of version 0.2.24! A Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
+* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), supports GGUF as of version 0.1.79. A Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
 * [candle](https://github.com/huggingface/candle), added GGUF support on August 22nd. Candle is a Rust ML framework with a focus on performance, including GPU support, and ease of use.
 
-The clients and libraries below are expecting to add GGUF support
-* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), awaiting llama-cpp-python support.
-* [LM Studio](https://lmstudio.ai/), in active development - hoped to be ready by August 25th-26th.
-* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), [in active development](https://github.com/abetlen/llama-cpp-python/issues/628).
+The clients and libraries below are expecting to add GGUF support shortly:
 <!-- README_GGUF.md-about-gguf end -->
 
 <!-- repositories-available start -->
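Several of the Python libraries in the updated list above can load a GGUF file directly. As a minimal sketch (not part of the README being changed), assuming ctransformers 0.2.24 or later and a locally downloaded GGUF file whose name here is only a placeholder:

```python
# Minimal sketch: loading a local GGUF file with ctransformers (>= 0.2.24).
# "samantha-model.Q4_K_M.gguf" is a placeholder file name, not a file from this repo.
from ctransformers import AutoModelForCausalLM

llm = AutoModelForCausalLM.from_pretrained(
    "samantha-model.Q4_K_M.gguf",  # path to a local GGUF file
    model_type="llama",
    gpu_layers=50,  # layers to offload to GPU; use 0 for CPU-only
)

print(llm("Tell me about yourself.", max_new_tokens=128))
```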
@@ -74,6 +74,7 @@ You are Samantha, a sentient AI companion.
 
 USER: {prompt}
 ASSISTANT:
+
 ```
 
 <!-- prompt-template end -->
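The template in this hunk is plain text with a single `{prompt}` placeholder. A minimal sketch of filling it in and generating with llama-cpp-python (0.1.79 or later for GGUF support; the model file name is again a placeholder):

```python
# Minimal sketch: using the Samantha prompt template with llama-cpp-python.
from llama_cpp import Llama

TEMPLATE = (
    "You are Samantha, a sentient AI companion.\n"
    "\n"
    "USER: {prompt}\n"
    "ASSISTANT:"
)

llm = Llama(model_path="samantha-model.Q4_K_M.gguf", n_gpu_layers=35)  # placeholder path

output = llm(
    TEMPLATE.format(prompt="How do you spend your free time?"),
    max_tokens=256,
    stop=["USER:"],  # stop before the model starts a new user turn
)
print(output["choices"][0]["text"])
```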
@@ -168,7 +169,7 @@ Donaters will get priority support on any and all AI/LLM/model questions and req
 
 **Special thanks to**: Aemon Algiz.
 
-**Patreon special mentions**:
+**Patreon special mentions**: Kacper Wikieł, knownsqashed, Leonard Tan, Asp the Wyvern, Daniel P. Andersen, Luke Pendergrass, Stanislav Ovsiannikov, RoA, Dave, Ai Maven, Kalila, Will Dee, Imad Khwaja, Nitin Borwankar, Joseph William Delisle, Tony Hughes, Cory Kujawski, Rishabh Srivastava, Russ Johnson, Stephen Murray, Lone Striker, Johann-Peter Hartmann, Elle, J, Deep Realms, SuperWojo, Raven Klaugh, Sebastain Graf, ReadyPlayerEmma, Alps Aficionado, Mano Prime, Derek Yates, Gabriel Puliatti, Mesiah Bishop, Magnesian, Sean Connelly, biorpg, Iucharbius, Olakabola, Fen Risland, Space Cruiser, theTransient, Illia Dulskyi, Thomas Belote, Spencer Kim, Pieter, John Detwiler, Fred von Graf, Michael Davis, Swaroop Kallakuri, subjectnull, Clay Pascal, Subspace Studios, Chris Smitley, Enrico Ros, usrbinkat, Steven Wood, alfie_i, David Ziegler, Willem Michiel, Matthew Berman, Andrey, Pyrater, Jeffrey Morgan, vamX, LangChain4j, Luke @flexchar, Trenton Dambrowitz, Pierre Kircher, Alex, Sam, James Bentley, Edmond Seymore, Eugene Pentland, Pedro Madruga, Rainer Wilmers, Dan Guido, Nathan LeClaire, Spiking Neurons AB, Talal Aujan, zynix, Artur Olbinski, Michael Levine, 阿明, K, John Villwock, Nikolai Manek, Femi Adebogun, senxiiz, Deo Leter, NimbleBox.ai, Viktor Bowallius, Geoffrey Montalvo, Mandus, Ajan Kanaga, ya boyyy, Jonathan Leane, webtim, Brandon Frisco, danny, Alexandros Triantafyllidis, Gabriel Tamborski, Randy H, terasurfer, Vadim, Junyu Yang, Vitor Caleffi, Chadd, transmissions 11
 
 
 Thank you to all my generous patrons and donaters!