- nothingiisreal/MN-12B-Celeste-V1.9
---

![image/png](https://cdn-uploads.huggingface.co/production/uploads/66b564058d9afb7a9d5607d5/1-LcsrW7AxytF-2GuouSG.png)

# Pleiades-12B-v1

Three rocks, one blender.

Pleiades-12B-v1 is a merge of the following models:
* [anthracite-org/magnum-12b-v2](https://huggingface.co/anthracite-org/magnum-12b-v2)
* [Sao10K/MN-12B-Lyra-v1](https://huggingface.co/Sao10K/MN-12B-Lyra-v1)
* [nothingiisreal/MN-12B-Celeste-V1.9](https://huggingface.co/nothingiisreal/MN-12B-Celeste-V1.9)

## Configuration

```yaml
models:
  ...
parameters:
  ...
dtype: bfloat16
```
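
A recipe like this is executed with [mergekit](https://github.com/arcee-ai/mergekit). As a minimal sketch, assuming mergekit's Python entry points (`MergeConfiguration`, `run_merge`, `MergeOptions`) and that the full, unabridged config is saved locally; the file and output paths are illustrative:

```python
import yaml
import torch

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Load the full merge recipe (the YAML above, unabridged) -- path is illustrative.
with open("config.yml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    out_path="./Pleiades-12B-v1",  # illustrative output directory
    options=MergeOptions(
        cuda=torch.cuda.is_available(),  # merge on GPU when available
        copy_tokenizer=True,             # carry the tokenizer into the output
    ),
)
```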

## Usage

```python
!pip install -qU transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "GalrionSoftworks/Pleiades-12B-v1"
messages = [{"role": "user", "content": "Who is Alan Turing?"}]

tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.bfloat16,  # matches the merge's bfloat16 dtype
    device_map="auto",
)

outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.8, top_k=0, top_p=0.90, min_p=0.05)
print(outputs[0]["generated_text"])
```
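
The generation call leans on min-p sampling (`top_k=0` turns top-k off): `min_p=0.05` discards any token whose probability is below 5% of the most likely token's. A minimal sketch of that rule for intuition (the helper name is made up for illustration; this is not the transformers implementation):

```python
import torch

def min_p_filter(logits: torch.Tensor, min_p: float = 0.05) -> torch.Tensor:
    """Keep only tokens whose probability is >= min_p * the top token's probability."""
    probs = torch.softmax(logits, dim=-1)
    threshold = min_p * probs.max(dim=-1, keepdim=True).values
    kept = torch.where(probs >= threshold, probs, torch.zeros_like(probs))
    return kept / kept.sum(dim=-1, keepdim=True)  # renormalize over survivors

# A peaked distribution keeps few candidates; a flatter one keeps more.
print(min_p_filter(torch.tensor([4.0, 3.5, 1.0, -2.0])))
```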

## Instruction Template

ChatML or... maybe Mistral Instruct?

ChatML:

```md
<|im_start|>user
Hi there!<|im_end|>
<|im_start|>assistant
Nice to meet you!<|im_end|>
<|im_start|>user
Can I ask a question?<|im_end|>
<|im_start|>assistant
```
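
Assuming the merged tokenizer ships a ChatML chat template (which the usage snippet above relies on), `apply_chat_template` reproduces this layout; a quick way to confirm:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("GalrionSoftworks/Pleiades-12B-v1")
messages = [
    {"role": "user", "content": "Hi there!"},
    {"role": "assistant", "content": "Nice to meet you!"},
    {"role": "user", "content": "Can I ask a question?"},
]
# add_generation_prompt=True appends the trailing "<|im_start|>assistant" turn.
print(tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True))
```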