Quantization made by Richard Erkhov.
Truthful_DPO_TomGrc_FusionNet_7Bx2_MoE_13B - GGUF
- Model creator: https://huggingface.co/yunconglong/
- Original model: https://huggingface.co/yunconglong/Truthful_DPO_TomGrc_FusionNet_7Bx2_MoE_13B/
Original model description:
license: mit
tags:
- moe
- DPO
- RL-TUNED
- Trained with the DPO Trainer on the jondurbin/truthy-dpo-v0.1 dataset to improve [TomGrc/FusionNet_7Bx2_MoE_14B]
DPO Trainer

TRL supports the DPO Trainer for training language models from preference data, as described in the paper Direct Preference Optimization: Your Language Model is Secretly a Reward Model by Rafailov et al., 2023.
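The objective minimized is the DPO loss from that paper, which trains the policy to prefer the chosen response over the rejected one while staying close to a frozen reference model:

$$\mathcal{L}_{\mathrm{DPO}}(\pi_\theta;\pi_{\mathrm{ref}}) = -\,\mathbb{E}_{(x,\,y_w,\,y_l)\sim\mathcal{D}}\left[\log\sigma\!\left(\beta\log\frac{\pi_\theta(y_w\mid x)}{\pi_{\mathrm{ref}}(y_w\mid x)} - \beta\log\frac{\pi_\theta(y_l\mid x)}{\pi_{\mathrm{ref}}(y_l\mid x)}\right)\right]$$

Below is a minimal sketch of how such a run could look with TRL's DPOTrainer. The exact TRL version and hyperparameters used for this model are not documented here, so the values are illustrative only, and argument names differ slightly across TRL releases (older releases pass `tokenizer=` and `beta=` to the trainer directly).

```python
# Hedged sketch of DPO fine-tuning with TRL's DPOTrainer on the truthy-dpo dataset.
# Hyperparameters are illustrative, not the values used to produce this model.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

base_model = "TomGrc/FusionNet_7Bx2_MoE_14B"   # base model being improved
model = AutoModelForCausalLM.from_pretrained(base_model)
tokenizer = AutoTokenizer.from_pretrained(base_model)

# Preference data: each row holds a prompt plus a chosen and a rejected answer.
dataset = load_dataset("jondurbin/truthy-dpo-v0.1", split="train")
dataset = dataset.select_columns(["prompt", "chosen", "rejected"])

args = DPOConfig(
    output_dir="truthful-dpo-output",
    beta=0.1,                          # strength of the implicit KL penalty
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,
    num_train_epochs=1,
)

trainer = DPOTrainer(
    model=model,                       # ref_model defaults to a frozen copy of `model`
    args=args,
    train_dataset=dataset,
    processing_class=tokenizer,        # older TRL versions use tokenizer= instead
)
trainer.train()
```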