nebchi's Collections
Fine Tuning Model
DPO Dataset
updated Aug 8
A collection of Korean DPO datasets
maywell/ko_Ultrafeedback_binarized
Viewer • Updated Nov 9, 2023 • 62k • 86 • 28

kuotient/orca-math-korean-dpo-pairs
Viewer • Updated Apr 5 • 193k • 117 • 9

zzunyang/dpo_data
Viewer • Updated Jan 26 • 126 • 81

SJ-Donald/orca-dpo-pairs-ko
Viewer • Updated Jan 24 • 36k • 88 • 8
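
Any of the datasets above can be pulled directly with the Hugging Face `datasets` library. The sketch below loads the first entry as an example; the split name and the "prompt"/"chosen"/"rejected" column names are assumptions based on typical DPO preference datasets, so check the dataset viewer for the actual schema.

```python
# Minimal sketch: load one Korean DPO dataset from this collection.
# The split name and column names are assumptions; verify them in the
# dataset viewer before training.
from datasets import load_dataset

ds = load_dataset("maywell/ko_Ultrafeedback_binarized", split="train")

print(ds.column_names)  # inspect the actual schema
print(ds[0])            # look at one preference pair (prompt / chosen / rejected)
```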