CultriX committed on
Commit 6a683ed
1 Parent(s): cb01dea

Update README.md

Files changed (1)
  1. README.md +72 -1

README.md CHANGED

# Note:
This is a test to check whether the INSTINSTINST error in the output has been fixed!
Please let me know if you still get errors when using this model.

---
tags:
- merge
- mergekit
- lazymergekit
- bardsai/jaskier-7b-dpo-v3.3
- CultriX/NeuralTrix-v4-bf16
- CultriX/NeuralTrix-7B-dpo
base_model:
- bardsai/jaskier-7b-dpo-v3.3
- CultriX/NeuralTrix-v4-bf16
- CultriX/NeuralTrix-7B-dpo
---

# NeuralTrix-bf16

NeuralTrix-bf16 is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [bardsai/jaskier-7b-dpo-v3.3](https://huggingface.co/bardsai/jaskier-7b-dpo-v3.3)
* [CultriX/NeuralTrix-v4-bf16](https://huggingface.co/CultriX/NeuralTrix-v4-bf16)
* [CultriX/NeuralTrix-7B-dpo](https://huggingface.co/CultriX/NeuralTrix-7B-dpo)

## 🧩 Configuration

```yaml
models:
  - model: eren23/dpo-binarized-NeuralTrix-7B
    # no parameters necessary for base model
  - model: bardsai/jaskier-7b-dpo-v3.3
    parameters:
      density: 0.65
      weight: 0.4
  - model: CultriX/NeuralTrix-v4-bf16
    parameters:
      density: 0.6
      weight: 0.35
  - model: CultriX/NeuralTrix-7B-dpo
    parameters:
      density: 0.6
      weight: 0.35
merge_method: dare_ties
base_model: eren23/dpo-binarized-NeuralTrix-7B
parameters:
  int8_mask: true
dtype: bfloat16
```
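
If you want to reproduce the merge yourself, this configuration can be passed to mergekit's command-line interface. The snippet below is a minimal sketch of a typical local workflow, not the exact commands used for this model: it assumes mergekit is installed, that the YAML above is saved as `config.yaml`, and the output directory name `./merged` is arbitrary.

```python
# Notebook-style sketch: save the YAML configuration above as config.yaml,
# then run the merge. The flags are optional conveniences; drop them if unsupported
# in your environment (e.g. --cuda on a CPU-only machine).
!mergekit-yaml config.yaml ./merged --copy-tokenizer --cuda --lazy-unpickle
```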

## 💻 Usage

```python
!pip install -qU transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "CultriX/NeuralTrix-bf16"  # the model this card describes
messages = [{"role": "user", "content": "What is a large language model?"}]

tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
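
Since the note at the top of this card concerns the INSTINSTINST artifact, a quick sanity check is to scan the generated text for repeated instruction tags. This is only a heuristic sketch (the regular expression is an assumption about how the artifact appears; `outputs` comes from the snippet above):

```python
import re

generated = outputs[0]["generated_text"]

# Heuristic check: flag runs of two or more consecutive "INST" fragments,
# e.g. "INSTINSTINST", the artifact mentioned in the note at the top of this card.
if re.search(r"(?:INST){2,}", generated):
    print("Warning: repeated INST tokens detected in the output.")
else:
    print("No repeated INST tokens detected.")
```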