Update README.md
README.md CHANGED
@@ -15,7 +15,7 @@ pipeline_tag: text-generation
 
 
 <p align="center">
-    🤗 <a href="https://huggingface.co/datasets/THUDM/LongReward-10k" target="_blank">[LongReward Dataset]</a> • 💻 <a href="https://github.com/THUDM/LongReward" target="_blank">[GitHub Repo]</a> • 📃 <a href="https://arxiv.org/abs/" target="_blank">[LongReward Paper]</a>
+    🤗 <a href="https://huggingface.co/datasets/THUDM/LongReward-10k" target="_blank">[LongReward Dataset]</a> • 💻 <a href="https://github.com/THUDM/LongReward" target="_blank">[GitHub Repo]</a> • 📃 <a href="https://arxiv.org/abs/2410.21252" target="_blank">[LongReward Paper]</a>
 </p>
 
 LongReward-llama3.1-8b-DPO is the DPO version of [LongReward-llama3.1-8b-SFT](https://huggingface.co/THUDM/LongReward-llama3.1-8b-SFT) and supports a maximum context window of up to 64K tokens. It is trained on the `dpo_llama3.1_8b` split of the [LongReward-10k](https://huggingface.co/datasets/THUDM/LongReward-10k) dataset, a long-context preference dataset constructed via LongReward.
@@ -85,7 +85,7 @@ If you find our work useful, please consider citing LongReward:
     title = {LongReward: Improving Long-context Large Language Models
              with AI Feedback},
     author={Jiajie Zhang and Zhongni Hou and Xin Lv and Shulin Cao and Zhenyu Hou and Yilin Niu and Lei Hou and Yuxiao Dong and Ling Feng and Juanzi Li},
-    journal={arXiv preprint arXiv:},
+    journal={arXiv preprint arXiv:2410.21252},
     year={2024}
 }
 ```