
We propose a novel strategy to enhance off-policy preference optimization by simulating on-policy learning with off-policy preference data. Our Weighted Preference Optimization (WPO) method adapts off-policy data to resemble on-policy data more closely by reweighting preference pairs according to their probability under the current policy. This method not only addresses the distributional gap problem but also enhances the optimization process without incurring additional costs. Refer to our preprint and repo for details.
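As a rough illustration of this weighting idea (a sketch only, not the exact objective from the preprint), the snippet below scales a DPO-style loss by the detached probability of each preference pair under the current policy. The function name and the use of length-normalized log-probabilities are assumptions made here for brevity.

```python
import torch
import torch.nn.functional as F

def wpo_style_loss(policy_chosen_logps, policy_rejected_logps,
                   ref_chosen_logps, ref_rejected_logps, beta=0.01):
    """Weighted DPO-style loss: each preference pair is down-weighted when its
    responses are unlikely under the current policy. All *_logps are assumed
    to be length-normalized sequence log-probabilities of shape (batch,).
    """
    # Standard DPO logits: implicit reward margin between chosen and rejected.
    logits = (policy_chosen_logps - ref_chosen_logps) \
             - (policy_rejected_logps - ref_rejected_logps)
    losses = -F.logsigmoid(beta * logits)

    # Illustrative WPO-style weight: probability of the pair under the current
    # policy, detached so it rescales the loss but receives no gradient.
    weights = torch.exp(policy_chosen_logps + policy_rejected_logps).detach()

    return (weights * losses).mean()
```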

Model Description

Data

Llama3-Instruct-8B model fine-tuned with hybrid WPO, using three types of data:

  1. Ultrafeedback dataset.
  2. On-policy sampled Llama outputs based on Ultrafeedback prompts.
  3. GPT-4-turbo outputs based on Ultrafeedback prompts.

Compared to the preference data construction method described in our paper, this model uses the following modified procedure (see the sketch after this list):

  1. The response with the minimum score is used as the rejected response.
  2. When multiple outputs share the same highest score, the shortest one is selected as the chosen response.
  3. When multiple outputs share the same minimum score, the one with the smallest length difference from the chosen response is selected as the rejected response.
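
A minimal sketch of these selection rules, assuming each candidate is a dict with hypothetical "text" and "score" fields:

```python
def build_preference_pair(candidates):
    """Select (chosen, rejected) from scored candidates following the rules above.
    `candidates` is a list of dicts with hypothetical keys "text" and "score".
    """
    best = max(c["score"] for c in candidates)
    worst = min(c["score"] for c in candidates)

    # Chosen: highest score; ties broken by shortest response.
    chosen = min((c for c in candidates if c["score"] == best),
                 key=lambda c: len(c["text"]))

    # Rejected: lowest score; ties broken by smallest length difference
    # from the chosen response.
    rejected = min((c for c in candidates if c["score"] == worst),
                   key=lambda c: abs(len(c["text"]) - len(chosen["text"])))

    return chosen, rejected
```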

The model is trained on the wzhouad/llama3-ultrafeedback-hybrid-v2 dataset.

AlpacaEval Results

| Model | LC (%) | WR (%) | Avg. Length |
|---|---|---|---|
| Llama3-Instruct-8B-WPO-HB-v2 | 53.4 | 57.3 | 2472 |

LC = length-controlled win rate; WR = raw win rate.

Link to Other WPO Models

Check out our WPO Collection.

Training Hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 1e-06
  • beta: 0.01
  • per_device_train_batch_size: 2
  • gradient_accumulation_steps: 8
  • seed: 1
  • num_devices: 8
  • optim: adamw_torch
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_ratio: 0.1
  • num_train_epochs: 2.0
  • max_length: 2048
  • max_prompt_length: 1800
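
For reference, here is a hypothetical mapping of these values onto a TRL-style `DPOConfig` (a recent TRL version is assumed; the actual WPO trainer lives in the authors' repo, and `num_devices` is determined by the launcher, e.g. `accelerate` or `torchrun`, rather than by the config):

```python
from trl import DPOConfig

# Hypothetical mapping of the listed hyperparameters onto a TRL-style DPOConfig.
config = DPOConfig(
    output_dir="llama3-instruct-8b-wpo-hb-v2",  # placeholder path
    learning_rate=1e-6,
    beta=0.01,
    per_device_train_batch_size=2,
    gradient_accumulation_steps=8,
    seed=1,
    optim="adamw_torch",
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=2.0,
    max_length=2048,
    max_prompt_length=1800,
)
# Effective batch size: 2 per device x 8 accumulation steps x 8 devices = 128.
```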

License

This model is licensed under the Zoom software license and is permitted for use only for noncommercial, educational, or academic research purposes.

Citation

WPO:

@article{zhou2024wpo,
  title={WPO: Enhancing RLHF with Weighted Preference Optimization},
  author={Zhou, Wenxuan and Agrawal, Ravi and Zhang, Shujian and Indurthi, Sathish Reddy and Zhao, Sanqiang and Song, Kaiqiang and Xu, Silei and Zhu, Chenguang},
  journal={arXiv preprint arXiv:2406.11827},
  year={2024}
}

Ultrafeedback:

@article{cui2023ultrafeedback,
  title={{UltraFeedback}: Boosting language models with high-quality feedback},
  author={Cui, Ganqu and Yuan, Lifan and Ding, Ning and Yao, Guanming and Zhu, Wei and Ni, Yuan and Xie, Guotong and Liu, Zhiyuan and Sun, Maosong},
  journal={arXiv preprint arXiv:2310.01377},
  year={2023}
}