Update README.md
README.md
CHANGED
@@ -1,28 +1,24 @@
---
-base_model: HuggingFaceH4/mistral-7b-ift
tags:
- generated_from_trainer
model-index:
-- name:
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

-#

-This model is a fine-tuned version of [HuggingFaceH4/mistral-7b-ift](https://huggingface.co/HuggingFaceH4/mistral-7b-ift) on the HuggingFaceH4/ultrafeedback dataset.
-It achieves the following results on the evaluation set:
-- Loss: 0.4605
-- Rewards/chosen: -0.5053
-- Rewards/rejected: -1.8752
-- Rewards/accuracies: 0.7812
-- Rewards/margins: 1.3699
-- Logps/rejected: -327.4286
-- Logps/chosen: -297.1040
-- Logits/rejected: -2.7153
-- Logits/chosen: -2.7447

## Model description

@@ -30,11 +26,44 @@ More information needed

## Intended uses & limitations

-More information needed

## Training and evaluation data

-More information needed

## Training procedure

@@ -84,4 +113,4 @@ The following hyperparameters were used during training:
- Transformers 4.34.0
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
-- Tokenizers 0.14.0

---
tags:
- generated_from_trainer
model-index:
- name: zephyr-7b-alpha
  results: []
license: cc-by-nc-4.0
datasets:
- stingning/ultrachat
- openbmb/UltraFeedback
language:
- en
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# Zephyr 7B Alpha

Zephyr is a series of language models that are trained to act as helpful assistants. Zephyr-7B-α is the first model in the series, and is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) that was trained on a mix of publicly available, synthetic datasets using [Direct Preference Optimization (DPO)](https://arxiv.org/abs/2305.18290). We found that removing the in-built alignment of these datasets boosted performance on [MT Bench](https://huggingface.co/spaces/lmsys/mt-bench) and made the model more helpful. However, this means that the model is likely to generate problematic text when prompted to do so, and it should only be used for educational and research purposes.
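
For reference, DPO fine-tunes the policy directly on preference pairs rather than training a separate reward model. The objective below is the one defined in the linked paper, where $\pi_\theta$ is the model being trained, $\pi_{\mathrm{ref}}$ is a frozen reference model, $y_w$ and $y_l$ are the chosen and rejected completions, $\sigma$ is the logistic function, and $\beta$ controls how far the policy may drift from the reference:

$$
\mathcal{L}_{\mathrm{DPO}}(\pi_\theta; \pi_{\mathrm{ref}}) = -\mathbb{E}_{(x, y_w, y_l) \sim \mathcal{D}} \left[ \log \sigma \left( \beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\mathrm{ref}}(y_w \mid x)} - \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\mathrm{ref}}(y_l \mid x)} \right) \right]
$$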

## Model description

More information needed

## Intended uses & limitations

The model was initially fine-tuned on a variant of the [`UltraChat`](https://huggingface.co/datasets/stingning/ultrachat) dataset, which contains a diverse range of synthetic dialogues generated by ChatGPT. We then further aligned the model with [🤗 TRL's](https://github.com/huggingface/trl) `DPOTrainer` on the [openbmb/UltraFeedback](https://huggingface.co/datasets/openbmb/UltraFeedback) dataset, which contains 64k prompts and model completions that are ranked by GPT-4 (a rough sketch of this step is shown below). As a result, the model can be used for chat, and you can check out our [demo](https://huggingface.co/spaces/HuggingFaceH4/zephyr-playground) to test its capabilities.
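
As an illustration of that alignment step, here is a minimal sketch of DPO training with TRL's `DPOTrainer`. This is not the actual Zephyr training script: the starting checkpoint, the toy preference pairs, and the hyperparameters below are all placeholders (the real run started from the UltraChat fine-tuned model and used the full UltraFeedback dataset).

```python
# Minimal DPO sketch with TRL's DPOTrainer (illustrative, not the Zephyr recipe)
from datasets import Dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

checkpoint = "mistralai/Mistral-7B-v0.1"  # placeholder; Zephyr starts from an SFT checkpoint
model = AutoModelForCausalLM.from_pretrained(checkpoint)      # policy being optimized
model_ref = AutoModelForCausalLM.from_pretrained(checkpoint)  # frozen reference for the KL term
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
tokenizer.pad_token = tokenizer.eos_token  # Mistral's tokenizer has no pad token by default

# DPOTrainer expects "prompt", "chosen" and "rejected" columns; these toy
# pairs stand in for the 64k GPT-4-ranked pairs in UltraFeedback.
train_dataset = Dataset.from_dict({
    "prompt": ["What is the capital of France?"],
    "chosen": ["The capital of France is Paris."],
    "rejected": ["France is a large country in Europe."],
})

training_args = TrainingArguments(
    output_dir="zephyr-dpo-sketch",
    per_device_train_batch_size=1,
    num_train_epochs=1,
    remove_unused_columns=False,  # keep the raw text columns for the DPO collator
)

trainer = DPOTrainer(
    model,
    model_ref,
    args=training_args,
    beta=0.1,  # strength of the implicit KL penalty toward the reference model
    train_dataset=train_dataset,
    tokenizer=tokenizer,
)
trainer.train()
```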

Here's how you can run the model using the `pipeline()` function from 🤗 Transformers:

```python
import torch
from transformers import pipeline

pipe = pipeline("text-generation", model="HuggingFaceH4/zephyr-7b-alpha", torch_dtype=torch.bfloat16, device_map="auto")

# We use a variant of ChatML to format each message
prompt_template = "<|system|>\n</s>\n<|user|>\n{query}</s>\n<|assistant|>\n"
prompt = prompt_template.format(query="How many helicopters can a human eat in one sitting?")
outputs = pipe(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
# Sampled answer: "Zero. Humans cannot consume or digest solid objects like helicopters, including their components such as rotor blades and engines. A human's diet is limited to food that they can swallow and break down through the process of digestion. Eating a helicopter would be physically impossible and could potentially cause serious harm if attempted."
```
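
Note that the template above hard-codes an empty system message: any text placed between `<|system|>` and the first `</s>` is treated as the system prompt, so you can substitute your own instructions there.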

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

Zephyr-7B-α has not been aligned to human preferences with techniques like RLHF or deployed with in-the-loop filtering of responses like ChatGPT, so the model can produce problematic outputs (especially when prompted to do so).
The size and composition of the corpus used to train the base model (`mistralai/Mistral-7B-v0.1`) are also unknown, but it is likely to have included a mix of Web data and technical sources like books and code. See the [Falcon 180B model card](https://huggingface.co/tiiuae/falcon-180B#training-data) for an example of this.

## Training and evaluation data

Zephyr 7B Alpha achieves the following results on the evaluation set:

- Loss: 0.4605
- Rewards/chosen: -0.5053
- Rewards/rejected: -1.8752
- Rewards/accuracies: 0.7812
- Rewards/margins: 1.3699
- Logps/rejected: -327.4286
- Logps/chosen: -297.1040
- Logits/rejected: -2.7153
- Logits/chosen: -2.7447
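
These are the quantities `DPOTrainer` logs during preference training: under DPO each completion has an implicit reward $r(x, y) = \beta \log \frac{\pi_\theta(y \mid x)}{\pi_{\mathrm{ref}}(y \mid x)}$, so Rewards/chosen and Rewards/rejected are this reward averaged over the preferred and rejected completions, Rewards/margins is the mean gap between them, and Rewards/accuracies is the fraction of pairs where the chosen completion scores higher than the rejected one.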

## Training procedure

### Framework versions

- Transformers 4.34.0
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.14.0