- Model creator: [Jarrad Hope](https://huggingface.co/jarradh)
- Original model: [Llama2 70B Chat Uncensored](https://huggingface.co/jarradh/llama2_70b_chat_uncensored)

<!-- description start -->
## Description

This repo contains GGUF format model files for [Jarrad Hope's Llama2 70B Chat Uncensored](https://huggingface.co/jarradh/llama2_70b_chat_uncensored).

<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF

GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. GGUF offers numerous advantages over GGML, such as better tokenisation and support for special tokens. It also supports metadata, and is designed to be extensible.
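
Because GGUF carries its metadata inside the file itself, it is easy to inspect. Here is a minimal sketch, assuming the `gguf` Python package from the llama.cpp repo (`pip install gguf`) and one of this repo's files already downloaded locally:

```python
# Minimal sketch: list the metadata keys embedded in a GGUF file.
# Assumes `pip install gguf` and a locally downloaded file from this repo.
from gguf import GGUFReader

reader = GGUFReader("llama2_70b_chat_uncensored.Q4_K_M.gguf")

# Metadata fields include the architecture, context length and tokenizer settings
for name in reader.fields:
    print(name)

print(f"{len(reader.tensors)} tensors")
```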

Here is an incomplete list of clients and libraries that are known to support GGUF:

* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy-to-use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server; see the short usage sketch after this list.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
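
For the Python libraries above, loading these files takes only a few lines of code. Here is a minimal llama-cpp-python sketch, assuming the Q4_K_M file has been downloaded locally; the context size, GPU offload and sampling settings simply mirror the llama.cpp example further down:

```python
# Minimal sketch: query this model through llama-cpp-python.
# Assumes `pip install llama-cpp-python` and a locally downloaded GGUF file.
from llama_cpp import Llama

llm = Llama(
    model_path="llama2_70b_chat_uncensored.Q4_K_M.gguf",
    n_ctx=4096,       # sequence length, as in the llama.cpp example below
    n_gpu_layers=32,  # layers to offload to GPU; set to 0 for CPU-only
)

# This model's Human-Response prompt template (see the next section)
prompt = "### HUMAN:\nWrite a story about llamas.\n\n### RESPONSE:"

output = llm(
    prompt,
    max_tokens=512,
    temperature=0.7,
    repeat_penalty=1.1,
    stop=["### HUMAN:"],  # stop if the model begins a new turn
)
print(output["choices"][0]["text"])
```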

<!-- prompt-template start -->
## Prompt template: Human-Response

```
### HUMAN:
{prompt}

### RESPONSE:
```

<!-- prompt-template end -->
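
Filling the `{prompt}` placeholder programmatically is a one-liner; a minimal sketch:

```python
# Minimal sketch: fill the Human-Response template for a single question.
template = "### HUMAN:\n{prompt}\n\n### RESPONSE:"
print(template.format(prompt="Write a story about llamas."))
```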

<!-- licensing start -->
## Licensing

The creator of the source model has listed its license as `llama2`, and this quantization has therefore used that same license.

As this model is based on Llama 2, it is also subject to the Meta Llama 2 license terms, and the license files for that are additionally included. It should therefore be treated as being licensed under both licenses. I contacted Hugging Face for clarification on dual licensing, but they do not yet have an official position. Should this change, or should Meta provide any feedback on this situation, I will update this section accordingly.

In the meantime, any questions regarding licensing, and in particular how these two licenses might interact, should be directed to the original model repository: [Jarrad Hope's Llama2 70B Chat Uncensored](https://huggingface.co/jarradh/llama2_70b_chat_uncensored).

<!-- licensing end -->
<!-- compatibility_gguf start -->
## Compatibility

These quantised GGUFv2 files are compatible with llama.cpp from August 27th 2023 onwards, as of commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221).

They are also compatible with many third-party UIs and libraries - please see the list at the top of this README.

## Explanation of quantisation methods

<details>
  <summary>Click to see details</summary>

The new methods available are:

* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw).
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K, resulting in 5.5 bpw.
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw.

</details>

Refer to the Provided Files table below to see what files use which methods, and how.

| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [llama2_70b_chat_uncensored.Q2_K.gguf](https://huggingface.co/TheBloke/llama2_70b_chat_uncensored-GGUF/blob/main/llama2_70b_chat_uncensored.Q2_K.gguf) | Q2_K | 2 | 29.28 GB | 31.78 GB | smallest, significant quality loss - not recommended for most purposes |
| [llama2_70b_chat_uncensored.Q3_K_S.gguf](https://huggingface.co/TheBloke/llama2_70b_chat_uncensored-GGUF/blob/main/llama2_70b_chat_uncensored.Q3_K_S.gguf) | Q3_K_S | 3 | 29.92 GB | 32.42 GB | very small, high quality loss |
| [llama2_70b_chat_uncensored.Q3_K_M.gguf](https://huggingface.co/TheBloke/llama2_70b_chat_uncensored-GGUF/blob/main/llama2_70b_chat_uncensored.Q3_K_M.gguf) | Q3_K_M | 3 | 33.19 GB | 35.69 GB | very small, high quality loss |
| [llama2_70b_chat_uncensored.Q4_0.gguf](https://huggingface.co/TheBloke/llama2_70b_chat_uncensored-GGUF/blob/main/llama2_70b_chat_uncensored.Q4_0.gguf) | Q4_0 | 4 | 38.87 GB | 41.37 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [llama2_70b_chat_uncensored.Q4_K_S.gguf](https://huggingface.co/TheBloke/llama2_70b_chat_uncensored-GGUF/blob/main/llama2_70b_chat_uncensored.Q4_K_S.gguf) | Q4_K_S | 4 | 39.07 GB | 41.57 GB | small, greater quality loss |
| [llama2_70b_chat_uncensored.Q4_K_M.gguf](https://huggingface.co/TheBloke/llama2_70b_chat_uncensored-GGUF/blob/main/llama2_70b_chat_uncensored.Q4_K_M.gguf) | Q4_K_M | 4 | 41.42 GB | 43.92 GB | medium, balanced quality - recommended |
| [llama2_70b_chat_uncensored.Q5_0.gguf](https://huggingface.co/TheBloke/llama2_70b_chat_uncensored-GGUF/blob/main/llama2_70b_chat_uncensored.Q5_0.gguf) | Q5_0 | 5 | 47.46 GB | 49.96 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [llama2_70b_chat_uncensored.Q5_K_S.gguf](https://huggingface.co/TheBloke/llama2_70b_chat_uncensored-GGUF/blob/main/llama2_70b_chat_uncensored.Q5_K_S.gguf) | Q5_K_S | 5 | 47.46 GB | 49.96 GB | large, low quality loss - recommended |
| [llama2_70b_chat_uncensored.Q5_K_M.gguf](https://huggingface.co/TheBloke/llama2_70b_chat_uncensored-GGUF/blob/main/llama2_70b_chat_uncensored.Q5_K_M.gguf) | Q5_K_M | 5 | 1.86 GB | 4.36 GB | large, very low quality loss - recommended |
| [llama2_70b_chat_uncensored.Q6_K.gguf](https://huggingface.co/TheBloke/llama2_70b_chat_uncensored-GGUF/blob/main/llama2_70b_chat_uncensored.Q6_K.gguf) | Q6_K | 6 | 0.00 GB | 2.50 GB | very large, extremely low quality loss |
| [llama2_70b_chat_uncensored.Q8_0.gguf](https://huggingface.co/TheBloke/llama2_70b_chat_uncensored-GGUF/blob/main/llama2_70b_chat_uncensored.Q8_0.gguf) | Q8_0 | 8 | 0.00 GB | 2.50 GB | very large, extremely low quality loss - not recommended |
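
Individual files from the table above can be fetched with the `huggingface_hub` Python library; a minimal sketch, assuming it is installed and using the Q4_K_M file as an example:

```python
# Minimal sketch: download one GGUF file from this repo.
# Assumes `pip install huggingface_hub`.
from huggingface_hub import hf_hub_download

local_path = hf_hub_download(
    repo_id="TheBloke/llama2_70b_chat_uncensored-GGUF",
    filename="llama2_70b_chat_uncensored.Q4_K_M.gguf",
)
print(local_path)  # path to the file in the local Hugging Face cache
```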
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.

```shell
./main -ngl 32 -m llama2_70b_chat_uncensored.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "### HUMAN:\n{prompt}\n\n### RESPONSE:"
```

Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.

Change `-c 4096` to the desired sequence length. For extended sequence models - e.g. 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.

If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`.

For further support, and discussions on these models and AI in general, join us at:

[TheBloke AI's Discord server](https://discord.gg/theblokeai)

## Thanks, and how to contribute

Thanks to the [chirper.ai](https://chirper.ai) team!

Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!

I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.

If you're able and willing to contribute, it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.