Transformers
GGUF
llama
uncensored
TheBloke committed on
Commit 4a0868b
1 Parent(s): d44f1aa

Upload README.md

Files changed (1)
  1. README.md +5 -12
README.md CHANGED
@@ -80,15 +80,8 @@ A chat between a curious user and an artificial intelligence assistant. The assi
  ```
 
  <!-- prompt-template end -->
- <!-- licensing start -->
- ## Licensing
 
- The creator of the source model has listed its license as `other`, and this quantization has therefore used that same license.
 
- As this model is based on Llama 2, it is also subject to the Meta Llama 2 license terms, and the license files for that are additionally included. It should therefore be considered as being claimed to be licensed under both licenses. I contacted Hugging Face for clarification on dual licensing but they do not yet have an official position. Should this change, or should Meta provide any feedback on this situation, I will update this section accordingly.
-
- In the meantime, any questions regarding licensing, and in particular how these two licenses might interact, should be directed to the original model repository: [Eric Hartford's Wizardlm 7B Uncensored](https://huggingface.co/ehartford/WizardLM-7B-Uncensored).
- <!-- licensing end -->
  <!-- compatibility_gguf start -->
  ## Compatibility
 
@@ -147,7 +140,7 @@ The following clients/libraries will automatically download models for you, prov
 
  ### In `text-generation-webui`
 
- Under Download Model, you can enter the model repo: TheBloke/WizardLM-7B-uncensored-GGUF and below it, a specific filename to download, such as: WizardLM-7B-uncensored.q4_K_M.gguf.
+ Under Download Model, you can enter the model repo: TheBloke/WizardLM-7B-uncensored-GGUF and below it, a specific filename to download, such as: WizardLM-7B-uncensored.Q4_K_M.gguf.
 
  Then click Download.
 
@@ -162,7 +155,7 @@ pip3 install huggingface-hub>=0.17.1
  Then you can download any individual model file to the current directory, at high speed, with a command like this:
 
  ```shell
- huggingface-cli download TheBloke/WizardLM-7B-uncensored-GGUF WizardLM-7B-uncensored.q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
+ huggingface-cli download TheBloke/WizardLM-7B-uncensored-GGUF WizardLM-7B-uncensored.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
  ```
 
  <details>
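The same download can also be scripted from Python with `huggingface_hub`; a minimal sketch, assuming `huggingface-hub>=0.17.1` is installed as shown above (illustrative, not part of the README):

```python
from huggingface_hub import hf_hub_download

# Fetch the single GGUF file into the current directory, mirroring the CLI call above.
hf_hub_download(
    repo_id="TheBloke/WizardLM-7B-uncensored-GGUF",
    filename="WizardLM-7B-uncensored.Q4_K_M.gguf",
    local_dir=".",
    local_dir_use_symlinks=False,
)
```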
@@ -185,7 +178,7 @@ pip3 install hf_transfer
  And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
 
  ```shell
- HUGGINGFACE_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/WizardLM-7B-uncensored-GGUF WizardLM-7B-uncensored.q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
+ HUGGINGFACE_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/WizardLM-7B-uncensored-GGUF WizardLM-7B-uncensored.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
  ```
 
  Windows CLI users: Use `set HUGGINGFACE_HUB_ENABLE_HF_TRANSFER=1` before running the download command.
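The faster `hf_transfer` path can likewise be used from Python, provided the environment variable is set before `huggingface_hub` is imported; a minimal sketch, assuming `hf_transfer` is installed as above:

```python
import os

# Set the flag before importing huggingface_hub so it is read at import time.
os.environ["HF_HUB_ENABLE_HF_TRANSFER"] = "1"

from huggingface_hub import hf_hub_download

hf_hub_download(
    repo_id="TheBloke/WizardLM-7B-uncensored-GGUF",
    filename="WizardLM-7B-uncensored.Q4_K_M.gguf",
    local_dir=".",
)
```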
@@ -198,7 +191,7 @@ Windows CLI users: Use `set HUGGINGFACE_HUB_ENABLE_HF_TRANSFER=1` before running
  Make sure you are using `llama.cpp` from commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
 
  ```shell
- ./main -ngl 32 -m WizardLM-7B-uncensored.q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: {prompt} ASSISTANT:"
+ ./main -ngl 32 -m WizardLM-7B-uncensored.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: {prompt} ASSISTANT:"
  ```
 
  Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
@@ -238,7 +231,7 @@ CT_METAL=1 pip install ctransformers>=0.2.24 --no-binary ctransformers
  from ctransformers import AutoModelForCausalLM
 
  # Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
- llm = AutoModelForCausalLM.from_pretrained("TheBloke/WizardLM-7B-uncensored-GGUF", model_file="WizardLM-7B-uncensored.q4_K_M.gguf", model_type="llama", gpu_layers=50)
+ llm = AutoModelForCausalLM.from_pretrained("TheBloke/WizardLM-7B-uncensored-GGUF", model_file="WizardLM-7B-uncensored.Q4_K_M.gguf", model_type="llama", gpu_layers=50)
 
  print(llm("AI is going to"))
  ```
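The `print(llm("AI is going to"))` call above uses ctransformers' default generation settings. The library also accepts sampling keyword arguments that roughly mirror the llama.cpp flags shown earlier; a short sketch with illustrative values, reusing the `llm` object from the snippet above:

```python
# Assumes `llm` was created with AutoModelForCausalLM.from_pretrained(...) as above.
# Parameter values are illustrative; adjust to taste.
output = llm(
    "AI is going to",
    max_new_tokens=256,
    temperature=0.7,
    repetition_penalty=1.1,
)
print(output)
```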