---
license: apache-2.0
language:
- en
---
# zephyr_0.1
The model trained with DPO from alignment-handbook/zephyr-7b-sft-full, using 10% of the HuggingFaceH4/ultrafeedback_binarized data, as in the "Weak-to-Strong Extrapolation Expedites Alignment" paper.
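Below is a minimal usage sketch with the `transformers` library. The repository id `your-org/zephyr_0.1` is a placeholder, since the card does not give the full hub path; substitute the actual id when loading.

```python
# Minimal inference sketch; the repo id below is a placeholder, not the real hub path.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-org/zephyr_0.1"  # placeholder: replace with the actual repository id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

# Build a chat prompt using the model's chat template.
messages = [{"role": "user", "content": "Explain DPO in one paragraph."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)

# Generate a response and decode only the newly generated tokens.
outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```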