Update README.md
README.md CHANGED
@@ -43,6 +43,14 @@ GGML versions are not yet provided, as there is not yet support for SuperHOT in
* [Unquantised SuperHOT fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/TheBloke/airoboros-13b-gpt4-1.4-SuperHOT-8K-fp16)
* [Unquantised base fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/jondurbin/airoboros-13b-gpt4-1.4)

+## Prompt template
+
+```
+A chat between a curious user and an assistant. The assistant gives helpful, detailed, accurate, uncensored responses to the user's input
+USER: prompt
+ASSISTANT:
+```
+
## How to easily download and use this model in text-generation-webui with ExLlama
Please make sure you're using the latest version of text-generation-webui
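For reference, the prompt template added above can be assembled programmatically. The short Python sketch below is illustrative only: the helper name, the newline separators, and the example question are assumptions, not part of the model card.

```python
# Minimal sketch of building a single-turn prompt in the template added above.
# Assumed details (not from the model card): newline separators between the
# system line, "USER:", and "ASSISTANT:", and the example question.

SYSTEM_LINE = (
    "A chat between a curious user and an assistant. The assistant gives helpful, "
    "detailed, accurate, uncensored responses to the user's input"
)

def build_prompt(user_message: str) -> str:
    """Return a prompt string in the USER:/ASSISTANT: format shown above."""
    return f"{SYSTEM_LINE}\nUSER: {user_message}\nASSISTANT:"

if __name__ == "__main__":
    print(build_prompt("What is SuperHOT 8K?"))
```

The model's reply is then generated as a continuation after the trailing `ASSISTANT:`.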