This reward model can be used for RLHF, including PPO, iterative SFT, and iterative DPO.

## Training
The base model is meta-llama/Meta-Llama-3-8B-Instruct.

We use the training script at `https://github.com/WeiXiongUST/RLHF-Reward-Modeling`.


We train the model for one epoch with a learning rate of 2e-6, a batch size of 512, and cosine learning rate decay with a warmup ratio of 0.03.
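
As a rough illustration, the sketch below maps these hyperparameters onto a Hugging Face `TrainingArguments` object. The output path, per-device batch size, gradient accumulation steps, and `bf16` flag are placeholder assumptions, not values from the original run; the actual training should be reproduced with the script linked above.

```python
from transformers import TrainingArguments

# Sketch only. The effective batch size of 512 is assumed to be reached via
# per_device_train_batch_size * gradient_accumulation_steps * num_gpus,
# e.g. 16 * 4 * 8 = 512 on 8 GPUs; adjust to your hardware.
training_args = TrainingArguments(
    output_dir="./fsfairx-rm",          # placeholder output path
    num_train_epochs=1,                 # one epoch
    learning_rate=2e-6,                 # peak learning rate
    lr_scheduler_type="cosine",         # cosine learning rate decay
    warmup_ratio=0.03,                  # 3% warmup
    per_device_train_batch_size=16,     # placeholder
    gradient_accumulation_steps=4,      # placeholder
    bf16=True,                          # assumption: bfloat16 training
)
```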

## Uses

```python
import torch
from transformers import AutoTokenizer, pipeline

rm_tokenizer = AutoTokenizer.from_pretrained("sfairXC/FsfairX-LLaMA3-RM-v0.1")
device = 0  # or accelerator.device

# The reward model is served through the text-classification pipeline.
rm_pipe = pipeline(
    "sentiment-analysis",
    model="sfairXC/FsfairX-LLaMA3-RM-v0.1",
    # device_map="auto",
    device=device,
    tokenizer=rm_tokenizer,
    model_kwargs={"torch_dtype": torch.bfloat16},
)

pipe_kwargs = {
    "return_all_scores": True,
    "function_to_apply": "none",  # return raw reward scores, not probabilities
    "batch_size": 1,
}

chat = [
    {"role": "user", "content": "Hello, how are you?"},
    {"role": "assistant", "content": "I'm doing great. How can I help you today?"},
    {"role": "user", "content": "I'd like to show off how chat templating works!"},
]

# Apply the chat template, then strip the BOS token so it is not duplicated
# when the pipeline tokenizes the text again.
test_texts = [
    rm_tokenizer.apply_chat_template(
        chat, tokenize=False, add_generation_prompt=False
    ).replace(rm_tokenizer.bos_token, "")
]
pipe_outputs = rm_pipe(test_texts, **pipe_kwargs)
rewards = [output[0]["score"] for output in pipe_outputs]
```
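
The scores returned by `rm_pipe` can then drive rejection sampling (best-of-N) in an iterative SFT or DPO loop. The helper below is a minimal sketch, not part of the official pipeline: `prompt_messages` and `candidate_replies` are hypothetical inputs, and it reuses `rm_tokenizer`, `rm_pipe`, and `pipe_kwargs` from the example above.

```python
def best_of_n(prompt_messages, candidate_replies):
    """Return the candidate reply with the highest reward, plus its score."""
    texts = []
    for reply in candidate_replies:
        chat = prompt_messages + [{"role": "assistant", "content": reply}]
        text = rm_tokenizer.apply_chat_template(
            chat, tokenize=False, add_generation_prompt=False
        ).replace(rm_tokenizer.bos_token, "")
        texts.append(text)

    outputs = rm_pipe(texts, **pipe_kwargs)
    scores = [out[0]["score"] for out in outputs]
    best_idx = max(range(len(scores)), key=lambda i: scores[i])
    return candidate_replies[best_idx], scores[best_idx]
```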


## Results


This reward model was the state-of-the-art open-source reward model on RewardBench as of Apr 20, 2024.

| Metric       | Score  |
|--------------|--------|
| Chat         | 99.44  |
| Chat Hard    | 65.13  |
| Safety       | 88.76  |
| Reasoning    | 88.3   |


## Reference
This repo is part of our work on iterative rejection-sampling fine-tuning and iterative DPO. If you find the content of this repo useful in your work, please consider citing it as follows:

```bibtex
@article{dong2023raft,
  title={Raft: Reward ranked finetuning for generative foundation model alignment},
  author={Dong, Hanze and Xiong, Wei and Goyal, Deepanshu and Pan, Rui and Diao, Shizhe and Zhang, Jipeng and Shum, Kashun and Zhang, Tong},
  journal={arXiv preprint arXiv:2304.06767},
  year={2023}
}

@misc{xiong2024iterative,
  title={Iterative Preference Learning from Human Feedback: Bridging Theory and Practice for RLHF under KL-Constraint},
  author={Wei Xiong and Hanze Dong and Chenlu Ye and Ziqi Wang and Han Zhong and Heng Ji and Nan Jiang and Tong Zhang},
  year={2024},
  eprint={2312.11456},
  archivePrefix={arXiv},
  primaryClass={cs.LG}
}
```