---
language:
- en
license: cc-by-nc-4.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- gguf
base_model: maldv/winter-garden-7b-alpha
datasets:
- maldv/cyberpunk
- microsoft/orca-math-word-problems-200k
- Weyaxi/sci-datasets
- grimulkan/theory-of-mind
- ResplendentAI/Synthetic_Soul_1k
- GraphWiz/GraphInstruct-RFT-72K
---

# Electric Mist 7B

- **Developed by:** maldv
- **License:** cc-by-nc-4.0
- **Finetuned from model:** alpindale/Mistral-7B-v0.2-hf
- **Methodology:** Simple newline-delimited, rolling-window book data and stripped conversation data.

## Will It Write

It's good. It goes page after page, though it needs an author's note to stay on track.

## Data

90% book data (with a lot of the pulpiest material removed) and 10% a mix of the other datasets, trained with LoRA rank 64 at a learning rate of 5e-5 for 2 epochs.
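
For reference, here is a minimal sketch of what a comparable LoRA setup looks like with plain `transformers` and `peft` (the actual run used unsloth). Only the rank, learning rate, epoch count, and base model come from this card; the target modules, alpha, batch size, and output directory are illustrative assumptions.

```python
# Minimal sketch of a comparable LoRA fine-tune, not the actual recipe.
# Only r=64, lr=5e-5, 2 epochs, and the base model come from the card;
# target modules, alpha, and batch size are assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from peft import LoraConfig, get_peft_model

base = "alpindale/Mistral-7B-v0.2-hf"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

lora = LoraConfig(
    r=64,                                                     # rank stated on the card
    lora_alpha=64,                                            # assumption
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumption
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)

args = TrainingArguments(
    output_dir="electric-mist-lora",    # assumption
    learning_rate=5e-5,                 # lr stated on the card
    num_train_epochs=2,                 # epochs stated on the card
    per_device_train_batch_size=1,      # assumption
)
# Pass `args` and the newline-delimited text dataset to a Trainer to run.
```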

## Chat Template

It was trained to follow no prompt at all; it just starts writing. There is explicitly no chat formatting in the training data, which is simply double-newline delimited (even the orca, math, and other instruction sets).
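As a rough illustration, here is a sketch of loading the model with `transformers` and continuing a plain-prose prompt. The repository id and the author's-note wording are assumptions for illustration, not something specified on this card.

```python
# Minimal sketch: no chat template, just plain prose with blank lines
# between paragraphs. The repo id and author's-note text are assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "maldv/electric-mist-7b"  # assumed repository id
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")

prompt = (
    "[Author's note: a slow-burn detective story set in a rain-soaked city.]\n\n"
    "The rain had not stopped for three days when Mara reached the tower.\n\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.8)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```

An author's note at the top of the context, as above, is one way to keep it on track over long generations.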

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="50"/>](https://github.com/unslothai/unsloth)