alpindale committed on
Commit 8c155e1
1 Parent(s): 2b13748

Create README.md

Files changed (1)
  1. README.md +47 -0
README.md ADDED
---
license: other
license_name: tongyi-qianwen
license_link: https://huggingface.co/Qwen/Qwen2-72B-Instruct/blob/main/LICENSE
language:
- en
- zh
pipeline_tag: text-generation
tags:
- chat
---
![image/png](https://cdn-uploads.huggingface.co/production/uploads/635567189c72a7e742f1419c/PK7xRSd18Du0bX-w_t-9c.png)
## This repo contains GGUF quants of the model. If you need the original weights, please find them [here](https://huggingface.co/anthracite-org/magnum-32b-v1).
This is the second in a series of models designed to replicate the prose quality of the Claude 3 models, specifically Sonnet and Opus. This model is fine-tuned on top of [Qwen1.5 32B](https://huggingface.co/Qwen/Qwen1.5-32B).

## Prompting
The model has been instruct-tuned with ChatML formatting. A typical input would look like this:

```py
"""<|im_start|>user
Hi there!<|im_end|>
<|im_start|>assistant
Nice to meet you!<|im_end|>
<|im_start|>user
Can I ask a question?<|im_end|>
<|im_start|>assistant
"""
```
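
Since this repo ships GGUF quants, here is a minimal sketch of sending that prompt to one of them with [llama-cpp-python](https://github.com/abetlen/llama-cpp-python); the quant filename is a placeholder, so substitute whichever file you downloaded:

```py
# Minimal sketch, assuming llama-cpp-python is installed
# (pip install llama-cpp-python) and a quant file is present locally.
from llama_cpp import Llama

llm = Llama(
    model_path="magnum-32b-v1.Q4_K_M.gguf",  # hypothetical filename; use your quant
    n_ctx=4096,
)

# The ChatML-formatted conversation above, ending at the open assistant turn
prompt = (
    "<|im_start|>user\n"
    "Hi there!<|im_end|>\n"
    "<|im_start|>assistant\n"
    "Nice to meet you!<|im_end|>\n"
    "<|im_start|>user\n"
    "Can I ask a question?<|im_end|>\n"
    "<|im_start|>assistant\n"
)

out = llm(prompt, max_tokens=256, stop=["<|im_end|>"])
print(out["choices"][0]["text"])
```

Stopping on `<|im_end|>` keeps the model from continuing past the end of its own turn.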

## Credits

Three new general-purpose instruction-following datasets were added on top of the original Stheno dataset (which had certain low-quality entries removed).
The first two were designed specifically for the Magnum series, to better address prompt adherence and coherence:
- [kalomaze/Opus_Instruct_25k](https://huggingface.co/datasets/kalomaze/Opus_Instruct_25k)
- [Nopm/Opus_WritingStruct](https://huggingface.co/datasets/Nopm/Opus_WritingStruct)
- [Gryphe/Sonnet3.5-SlimOrcaDedupCleaned](https://huggingface.co/datasets/Gryphe/Sonnet3.5-SlimOrcaDedupCleaned) (a ~16k-row subset)

This model has been a team effort, and credit goes to all members of Anthracite.

## Training
The training was done for 2 epochs with a learning rate of 1e-5. We used 8x [NVIDIA H100 Tensor Core](https://www.nvidia.com/en-us/data-center/h100/) GPUs for the full-parameter fine-tuning of the model.

[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)

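For reference, those hyperparameters could be expressed roughly as follows with Hugging Face `TrainingArguments`; this is a hypothetical sketch (the actual run was configured through Axolotl, and the output path and bf16 setting are assumptions):

```py
# Hypothetical sketch of the reported hyperparameters; the real run
# was configured through Axolotl, not this script.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="magnum-32b-v1-ft",  # hypothetical output path
    num_train_epochs=2,             # 2 epochs, as reported
    learning_rate=1e-5,             # reported learning rate
    bf16=True,                      # assumed precision for H100 training
)
```
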
## Safety
...