Update README.md
README.md
CHANGED
@@ -10,7 +10,9 @@ The Generalizable Reward Model (GRM) aims to enhance the generalization ability
Paper: [Regularizing Hidden States Enables Learning Generalizable Reward Model for LLMs](https://arxiv.org/abs/2406.10216).

-
+![image/png](https://cdn-uploads.huggingface.co/production/uploads/64d45451c34a346181b130dd/ieB57iMlZuK8zyTadW2M-.png)
+
+The framework is shown above. The introduced text-generation regularization markedly improves the accuracy of learned reward models across a variety of out-of-distribution tasks and effectively alleviates the over-optimization issue in RLHF (even with corrupted preference data), offering a more reliable and robust preference-learning paradigm.

This reward model is finetuned from [llama3_8b_instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) using the [hendrydong/preference_700K](https://huggingface.co/datasets/hendrydong/preference_700K) dataset.
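As a usage note, below is a minimal sketch of querying a reward model like this one for a scalar preference score. It assumes the checkpoint is published on the Hugging Face Hub as a standard sequence-classification head on the Llama-3 backbone; the repo id and the example conversation are placeholders, not part of this commit.

```python
# Minimal sketch: score one (prompt, response) conversation with a reward model.
# "your-org/GRM-llama3-8B" is a hypothetical repo id; substitute the released checkpoint.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "your-org/GRM-llama3-8B"  # placeholder
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(
    model_id,
    num_labels=1,                 # single scalar reward head
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
model.eval()

# Format the conversation with the tokenizer's chat template, then score it.
chat = [
    {"role": "user", "content": "Explain RLHF in one sentence."},
    {"role": "assistant", "content": "RLHF fine-tunes a language model against a reward model trained on human preference comparisons."},
]
input_ids = tokenizer.apply_chat_template(chat, tokenize=True, return_tensors="pt").to(model.device)
with torch.no_grad():
    reward = model(input_ids).logits[0, 0].item()  # higher score = more preferred response
print(f"reward: {reward:.3f}")
```

The same call pattern applies when ranking several candidate responses to one prompt: score each conversation separately and keep the highest-scoring one.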