shenxq/zephyr-7b-dpo-qlora

Tags: PEFT · TensorBoard · Safetensors · mistral · alignment-handbook · Generated from Trainer · trl · dpo · 4-bit precision · bitsandbytes
Dataset: snorkelai/Snorkel-Mistral-PairRM-DPO-Dataset
License: apache-2.0
Commit History — zephyr-7b-dpo-qlora / README.md (at revision 3ba2dc4)
- End of training — de0c06e (verified) — shenxq, committed on Mar 17
- Model save — 29be7a4 (verified) — shenxq, committed on Mar 17
- End of training — d81199e (verified) — shenxq, committed on Mar 17
- Model save — ac637be (verified) — shenxq, committed on Mar 17