add measurement.json

- README.md (+119, -0)
- measurement.json (+0, -0)

README.md (ADDED):

---
license: cc-by-nc-nd-3.0
---
# LLaMA3-iterative-DPO-final

## Introduction
We release an unofficial checkpoint of a state-of-the-art instruct model of its class, **LLaMA3-iterative-DPO-final**.
On all three widely used instruct-model benchmarks (**Alpaca-Eval-V2**, **MT-Bench**, and **Chat-Arena-Hard**), our model outperforms all models of similar size (e.g., LLaMA-3-8B-it), most larger open-source models (e.g., Mixtral-8x7B-it),
and strong proprietary models (e.g., GPT-3.5-turbo-0613). The model is trained on open-source datasets without any additional human or GPT-4 labeling.

Even better, we provide a [detailed recipe](https://github.com/RLHFlow/Online-RLHF) to reproduce the model. Enjoy!

## Model Releases
See the [collection](https://huggingface.co/collections/RLHFlow/online-rlhf-663ae95fade1a39663dab218) for the training set, reward/preference model, and SFT model.

- [SFT model](https://huggingface.co/RLHFlow/LLaMA3-SFT)
- [Reward model](https://huggingface.co/sfairXC/FsfairX-LLaMA3-RM-v0.1) (a scoring sketch follows this list)

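If you want to inspect the preference signal used during training, below is a minimal sketch of scoring one conversation with the released reward model. It assumes the checkpoint loads as a sequence-classification model whose single output logit is the reward; the example conversation, dtype, and scoring call are illustrative assumptions, so check the reward model's own card for its recommended usage.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

rm_name = "sfairXC/FsfairX-LLaMA3-RM-v0.1"
rm_tokenizer = AutoTokenizer.from_pretrained(rm_name)
# Assumption: the reward model is a Bradley-Terry style classifier with a single output logit.
rm = AutoModelForSequenceClassification.from_pretrained(rm_name, torch_dtype=torch.bfloat16)

conversation = [
    {"role": "user", "content": "What is the capital of France?"},
    {"role": "assistant", "content": "The capital of France is Paris."},
]
# Format the (prompt, response) pair with the chat template and score it.
input_ids = rm_tokenizer.apply_chat_template(conversation, return_tensors="pt")
with torch.no_grad():
    reward = rm(input_ids).logits[0, 0].item()  # higher reward = more preferred response
print(reward)
```
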
## Dataset
- [Preference data mix](https://huggingface.co/datasets/hendrydong/preference_700K)
- [Prompt collection for RLHF training](https://huggingface.co/datasets/RLHFlow/prompt-collection-v0.1) (a loading sketch follows this list)

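Both datasets can be pulled with the standard `datasets` library. The sketch below is a minimal example; the `train` split name and the exact column layout are assumptions, so inspect the loaded features before use.

```python
from datasets import load_dataset

# Preference pairs used for reward/preference modeling and DPO-style training.
prefs = load_dataset("hendrydong/preference_700K", split="train")  # split name assumed
print(prefs)  # row count and column names

# Prompt set used to generate on-policy responses during online RLHF.
prompts = load_dataset("RLHFlow/prompt-collection-v0.1", split="train")  # split name assumed
print(prompts[0])  # inspect one example to see the exact fields
```
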
## Training methods
We have developed a simple and efficient online RLHF recipe for LLM instruct training. Because the recipe is DPO-based, it is much cheaper to run and simpler to tune than PPO-based approaches.
Unlike the widely used offline DPO, the online component of our approach effectively mitigates distribution shift during policy optimization.
For a detailed exposition, please refer to our accompanying technical report.

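For intuition, the sketch below spells out the per-pair DPO objective in plain PyTorch. It is not the training code (that lives in the [Online-RLHF](https://github.com/RLHFlow/Online-RLHF) repository); the beta value, the tensor names, and the pairing rule in the comments are illustrative assumptions.

```python
# A minimal sketch of the DPO objective minimized in each online iteration.
# Assumed setup (not taken from this card): beta = 0.1, and each pair is the
# highest- vs. lowest-reward response among several on-policy samples for a prompt.
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps: torch.Tensor,
             policy_rejected_logps: torch.Tensor,
             ref_chosen_logps: torch.Tensor,
             ref_rejected_logps: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    """-log sigmoid(beta * [(log pi - log pi_ref)(chosen) - (log pi - log pi_ref)(rejected)])."""
    policy_margin = policy_chosen_logps - policy_rejected_logps
    reference_margin = ref_chosen_logps - ref_rejected_logps
    return -F.logsigmoid(beta * (policy_margin - reference_margin)).mean()

# Shape check with random sequence-level log-probabilities for a batch of 4 pairs.
torch.manual_seed(0)
batch = [torch.randn(4) for _ in range(4)]
print(dpo_loss(*batch).item())
```

Roughly, each online iteration then samples fresh responses from the current policy, ranks them with the reward model to build new (chosen, rejected) pairs, and minimizes this loss on them; generating the pairs on-policy is what counteracts the distribution shift mentioned above.
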
## Chat Benchmarks

| **Model** | **Size** | **Method** | **LC Alpaca-Eval-V2** | **MT-Bench** | **Chat-Arena-Hard** |
|-------------------------|----------|-------------------|-----------------------|--------------|---------------------|
| **Small Open-Source Models** | | | | | |
| Gemma-7B-it | 7B | SFT | 10.4 | 6.38 | 7.5 |
| Zephyr-7B-beta | 7B | Vanilla DPO | 13.1 | 7.34 | - |
| Mistral-7B-v0.2-it | 7B | SFT | 17.1 | 7.51 | 12.6 |
| Open-Chat-0106 | 7B | SFT | 15.6 | 7.8 | - |
| Starling-7B-beta | 7B | PPO | 25.8 | 8.12 | 23.0 |
| LLaMA-3-8B-it | 8B | RS+DPO+PPO | 22.9 | 8.16 | 20.6 |
| **Ours** | | | | | |
| Ours (SFT baseline) | 8B | SFT | 10.2 | 7.69 | 5.6 |
| Ours (DPO baseline) | 8B | Vanilla DPO | 22.5 | 8.17 | 22.4 |
| Ours (Online RLHF) | 8B | Iterative DPO | **37.2** | **8.46** | **29.1** |
| **Large Open-Source Models** | | | | | |
| Vicuna-33b-v1.3 | 33B | SFT | 17.6 | 7.12 | 8.6 |
| Yi-34B-Chat | 34B | SFT | 27.2 | - | 23.1 |
| Mixtral-8x7B-it | 45B* | SFT | 23.7 | 8.30 | 23.4 |
| Tulu-2-DPO-70B | 70B | Vanilla DPO | 21.2 | 7.89 | 15.0 |
| LLaMA-3-70B-it | 70B | RS+DPO+PPO | 34.4 | 8.95 | 41.1 |
| Mixtral-8x22B-it | 141B* | SFT | 30.9 | 8.66 | 36.4 |
| **Proprietary Models** | | | | | |
| GPT-3.5-turbo-1106 | - | - | 19.3 | 8.35 | 18.9 |
| GPT-3.5-turbo-0613 | - | - | 22.7 | 8.39 | 24.8 |
| GPT-4-0613 | - | - | 30.2 | 9.18 | 37.9 |
| Claude-3-Opus | - | - | 40.5 | 9.00 | 60.4 |
| GPT-4 Turbo (04/09) | - | - | 55.0 | - | 82.6 |

\*Total parameter count of the mixture-of-experts models.

## Academic Benchmarks

| **Model** | **Size** | **Method** | **GSM-8K** | **MMLU** | **HumanEval** | **TruthfulQA** | **ARC** | **MBPP** |
|----------------------------|----------|-----------------|------------|----------|---------------|----------------|---------|----------|
| LLaMA-3-8B-it | 8B | RS+DPO+PPO | 79.6 | 66.0 | 61.6 | 43.9 | 59.5 | 61.1 |
| Ours (SFT baseline) | 8B | SFT | 74.2 | 64.7 | 65.2 | 53.4 | 61.4 | 62.3 |
| Ours (DPO baseline) | 8B | Vanilla DPO | 79.8 | 64.5 | 63.4 | 61.8 | 65.2 | 60.3 |
| Ours (Iterative RLHF) | 8B | Iterative DPO | 80.7 | 65.3 | 64.6 | 60.4 | 64.3 | 60.8 |

## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda"

model = AutoModelForCausalLM.from_pretrained("RLHFlow/LLaMA3-iterative-DPO-final")
tokenizer = AutoTokenizer.from_pretrained("RLHFlow/LLaMA3-iterative-DPO-final")

messages = [
    {"role": "user", "content": "I'm trying to teach myself to have nicer handwriting. Can you help?"},
]

# add_generation_prompt=True appends the assistant header so the model answers
# the user turn instead of continuing it.
model_inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")

model_inputs = model_inputs.to(device)
model.to(device)

output_tokens = model.generate(model_inputs, max_new_tokens=1024, do_sample=True)
model_outputs = tokenizer.batch_decode(output_tokens)
print(model_outputs[0])
```

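Note that `batch_decode` above returns the full sequence, prompt and special tokens included. If you only want the assistant's reply, one option (continuing from the variables in the snippet above) is to slice off the prompt tokens before decoding:

```python
# Continues from the Usage snippet: keep only the newly generated tokens.
reply_ids = output_tokens[0][model_inputs.shape[-1]:]
reply = tokenizer.decode(reply_ids, skip_special_tokens=True)
print(reply)
```
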
## Limitations
RLHFlow/LLaMA3-iterative-DPO-final is an unofficial checkpoint developed to illustrate the power of online iterative RLHF and is intended for research purposes only. While safety and ethical considerations are integral to our alignment process,
there remains the possibility that the model could generate offensive or unethical content, particularly under adversarial conditions.
We are committed to continuously improving our models to minimize such risks and encourage responsible usage.

## Citation
Please cite our technical report if you find our model useful for your research or product.
```bibtex
@misc{dong2024rlhf,
      title={RLHF Workflow: From Reward Modeling to Online RLHF},
      author={Hanze Dong and Wei Xiong and Bo Pang and Haoxiang Wang and Han Zhao and Yingbo Zhou and Nan Jiang and Doyen Sahoo and Caiming Xiong and Tong Zhang},
      year={2024},
      eprint={2405.07863},
      archivePrefix={arXiv},
      primaryClass={cs.LG}
}

@misc{xiong2024iterative,
      title={Iterative Preference Learning from Human Feedback: Bridging Theory and Practice for RLHF under KL-Constraint},
      author={Wei Xiong and Hanze Dong and Chenlu Ye and Ziqi Wang and Han Zhong and Heng Ji and Nan Jiang and Tong Zhang},
      year={2024},
      eprint={2312.11456},
      archivePrefix={arXiv},
      primaryClass={cs.LG}
}
```