
We propose a novel strategy to enhance off-policy preference optimization by simulating on-policy learning with off-policy preference data. Our Weighted Preference Optimization (WPO) method adapts off-policy data to resemble on-policy data more closely by reweighting preference pairs according to their probability under the current policy. This method not only addresses the distributional gap problem but also enhances the optimization process without incurring additional costs. Refer to our preprint and repo for details.
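
As a rough illustration of the reweighting idea, the sketch below rescales a standard DPO loss by a detached, policy-dependent weight. The exact weight used by WPO is defined in the preprint and repo; the length-normalized sequence probability used here, and all tensor names, are illustrative assumptions rather than the paper's formulation.

```python
import torch
import torch.nn.functional as F

def wpo_style_loss(policy_chosen_logps, policy_rejected_logps,
                   ref_chosen_logps, ref_rejected_logps,
                   chosen_lens, rejected_lens, beta=0.01):
    """Schematic WPO-style objective: a DPO loss rescaled by how likely the
    preference pair is under the current policy. The weight is detached so it
    only reweights gradients; see the WPO paper for the exact weight."""
    # Standard DPO logits from summed per-sequence log-probabilities.
    logits = beta * ((policy_chosen_logps - ref_chosen_logps)
                     - (policy_rejected_logps - ref_rejected_logps))
    dpo_loss = -F.logsigmoid(logits)

    # Illustrative weight: length-normalized probability of both responses
    # under the current policy (an assumption, not the paper's exact weight).
    with torch.no_grad():
        weight = torch.exp(policy_chosen_logps / chosen_lens
                           + policy_rejected_logps / rejected_lens)

    return (weight * dpo_loss).mean()
```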

Model Description

Data

gemma-2-9b-it fine-tuned with hybrid WPO, using two types of data:

  1. On-policy gemma-2-9b-it outputs sampled on UltraFeedback prompts (see the sampling sketch after this list).
  2. GPT-4-turbo outputs on the same UltraFeedback prompts.
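
A minimal sketch of the on-policy half of this data. The prompt source (the HuggingFaceH4/ultrafeedback_binarized release), the number of samples per prompt, and the decoding settings below are assumptions for illustration; the released data at wzhouad/gemma-2-ultrafeedback-hybrid reflects the actual construction.

```python
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("google/gemma-2-9b-it")
model = AutoModelForCausalLM.from_pretrained(
    "google/gemma-2-9b-it", torch_dtype=torch.bfloat16, device_map="auto"
)

# Small demo slice of UltraFeedback prompts (any UltraFeedback source works).
prompts = load_dataset("HuggingFaceH4/ultrafeedback_binarized",
                       split="train_prefs")["prompt"][:4]

samples = {}
for prompt in prompts:
    chat = [{"role": "user", "content": prompt}]
    inputs = tok.apply_chat_template(chat, add_generation_prompt=True,
                                     return_tensors="pt").to(model.device)
    # Draw several on-policy samples per prompt (count/temperature are assumptions).
    outs = model.generate(inputs, max_new_tokens=512, do_sample=True,
                          temperature=0.8, num_return_sequences=4)
    samples[prompt] = [tok.decode(o[inputs.shape[-1]:], skip_special_tokens=True)
                       for o in outs]
```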

Compared to the preference-data construction method in our paper, we instead use RLHFlow/ArmoRM-Llama3-8B-v0.1 to score the outputs, and take the outputs with the maximum and minimum scores as the chosen and rejected responses of each preference pair.
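
A sketch of this scoring step, following the usage pattern shown on the ArmoRM model card. Here `samples` is assumed to map each prompt to its pooled candidate responses (e.g., the gemma samples from the sketch above plus the GPT-4-turbo output); the batching and pair format are assumptions.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

rm_name = "RLHFlow/ArmoRM-Llama3-8B-v0.1"
rm_tok = AutoTokenizer.from_pretrained(rm_name)
rm = AutoModelForSequenceClassification.from_pretrained(
    rm_name, torch_dtype=torch.bfloat16, device_map="auto", trust_remote_code=True
)

def armo_score(prompt: str, response: str) -> float:
    """Scalar preference score for a single (prompt, response) pair."""
    messages = [{"role": "user", "content": prompt},
                {"role": "assistant", "content": response}]
    input_ids = rm_tok.apply_chat_template(messages, return_tensors="pt").to(rm.device)
    with torch.no_grad():
        return rm(input_ids).score.item()

# Keep the highest- and lowest-scoring outputs as the chosen/rejected pair.
pairs = []
for prompt, responses in samples.items():
    ranked = sorted(responses, key=lambda r: armo_score(prompt, r))
    pairs.append({"prompt": prompt, "chosen": ranked[-1], "rejected": ranked[0]})
```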

We provide our training data at wzhouad/gemma-2-ultrafeedback-hybrid.

AlpacaEval Results

Model                  LC (%)   WR (%)   Avg. Length
gemma-2-9b-it-WPO-HB   76.73    77.83    2285

LC is the length-controlled win rate, WR the raw win rate, and Avg. Length the average response length.

Links to Other WPO Models

Check our WPO Collection.

Training Hyperparameters

The following hyperparameters were used during training (a configuration sketch follows the list):

  • learning_rate: 1e-06
  • beta: 0.01
  • per_device_train_batch_size: 1
  • gradient_accumulation_steps: 16
  • seed: 1
  • num_devices: 8
  • optim: adamw_torch
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_ratio: 0.1
  • num_train_epochs: 2.0
  • max_length: 2048
  • max_prompt_length: 1800
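
For reference, a hedged sketch of these settings expressed as a TRL-style DPOConfig. WPO uses the authors' own training code (see the repo), so this mapping is only illustrative and assumes a recent trl release where beta, max_length, and max_prompt_length are DPOConfig fields; with 8 devices, the effective batch size is 1 × 16 × 8 = 128 preference pairs per optimizer step.

```python
from trl import DPOConfig

config = DPOConfig(
    output_dir="gemma-2-9b-it-wpo-hb",  # assumed name
    beta=0.01,
    learning_rate=1e-6,
    per_device_train_batch_size=1,
    gradient_accumulation_steps=16,     # 1 * 16 * 8 devices = 128 pairs per update
    num_train_epochs=2.0,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    optim="adamw_torch",
    seed=1,
    max_length=2048,
    max_prompt_length=1800,
)
```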

License

This model is licensed under the Zoom software license and may be used only for noncommercial, educational, or academic research purposes.

Citation

WPO:

@article{zhou2024wpo,
  title={WPO: Enhancing RLHF with Weighted Preference Optimization},
  author={Zhou, Wenxuan and Agrawal, Ravi and Zhang, Shujian and Indurthi, Sathish Reddy and Zhao, Sanqiang and Song, Kaiqiang and Xu, Silei and Zhu, Chenguang},
  journal={arXiv preprint arXiv:2406.11827},
  year={2024}
}

Ultrafeedback:

@article{cui2023ultrafeedback,
  title={{UltraFeedback}: Boosting language models with high-quality feedback},
  author={Cui, Ganqu and Yuan, Lifan and Ding, Ning and Yao, Guanming and Zhu, Wei and Ni, Yuan and Xie, Guotong and Liu, Zhiyuan and Sun, Maosong},
  journal={arXiv preprint arXiv:2310.01377},
  year={2023}
}

ArmoRM:

@article{ArmoRM,
  title={Interpretable Preferences via Multi-Objective Reward Modeling and Mixture-of-Experts},
  author={Wang, Haoxiang and Xiong, Wei and Xie, Tengyang and Zhao, Han and Zhang, Tong},
  journal={arXiv preprint arXiv:2406.12845},
  year={2024}
}

@inproceedings{wang2024arithmetic,
  title={Arithmetic Control of LLMs for Diverse User Preferences: Directional Preference Alignment with Multi-Objective Rewards},
  author={Wang, Haoxiang and Lin, Yong and Xiong, Wei and Yang, Rui and Diao, Shizhe and Qiu, Shuang and Zhao, Han and Zhang, Tong},
  booktitle={ACL},
  year={2024}
}