---
license: other
tags:
- llama
- pytorch
- chatbot
- storywriting
- generalist-model
---

# chronos-13b-v2

This is the FP16 PyTorch / HF version of **chronos-13b-v2** based on the **LLaMA v2** model.

Use this version only for further quantization, or to run the model in full precision, provided you have the required VRAM.
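
If you want to run the FP16 weights directly, a minimal loading sketch with `transformers` might look like the following. The repo id and the VRAM estimate are assumptions, not part of this card:

```python
# Minimal sketch (not from this card): load the FP16 checkpoint with transformers.
# Assumptions: the repo id is "elinas/chronos-13b-v2", `accelerate` is installed
# for device_map="auto", and you have roughly 26+ GB of VRAM for 13B at FP16.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "elinas/chronos-13b-v2"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # run in FP16, as shipped
    device_map="auto",          # place layers across available GPUs
)
```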

This model is primarily focused on chat, roleplay, and storywriting, with good reasoning and logic.

Chronos can generate very long, coherent outputs, largely due to the human inputs it was trained on, and it supports a context length of up to 4096 tokens.

This model uses Alpaca formatting, so for optimal performance, either use a frontend like SillyTavern that applies it for you, or continue your story with it using this template:
```
### Instruction:
Your instruction or question here.
### Response:
```
Not using the format will make the model perform significantly worse than intended.
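
As an illustration, here is one way to wrap a prompt in this template and generate with the model loaded in the sketch above; the sampling settings are placeholder assumptions, not recommendations from this card:

```python
# Illustrative only: wrap a prompt in the Alpaca-style template and generate.
# Reuses `model` and `tokenizer` from the loading sketch above; sampling
# settings below are placeholder assumptions.
def build_prompt(instruction: str) -> str:
    return f"### Instruction:\n{instruction}\n### Response:\n"

prompt = build_prompt("Write the opening scene of a mystery set in a rainy harbor town.")
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

output = model.generate(
    **inputs,
    max_new_tokens=512,  # total context (prompt + output) tops out at 4096 tokens
    do_sample=True,
    temperature=0.7,
)
new_tokens = output[0][inputs["input_ids"].shape[1]:]
print(tokenizer.decode(new_tokens, skip_special_tokens=True))
```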

## Other Versions
[4bit GPTQ Quantized version](https://huggingface.co/elinas/chronos-13b-v2-GPTQ)

[GGML Versions provided by @TheBloke]()

## Notes on ggml usage
Note: If you wish to quantize this model yourself with ggml, use the provided `ggml_added_tokens.json` and rename it to `added_tokens.json`, replacing the original.
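
For example, a one-line sketch of that swap in Python (assuming you run it from the checkpoint directory; file names are from this card):

```python
# Hypothetical helper: swap in the ggml tokenizer metadata before quantizing.
from pathlib import Path

# Path.replace renames the file, overwriting the existing added_tokens.json.
Path("ggml_added_tokens.json").replace("added_tokens.json")
```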

<!--**Support My Development of New Models**
<a href='https://ko-fi.com/Q5Q6MB734' target='_blank'><img height='36' style='border:0px;height:36px;'
|
39 |
+
src='https://storage.ko-fi.com/cdn/kofi1.png?v=3' border='0' alt='Support Development' /></a>-->