Various useful datasets for preference optimization
Nicholas Beerbower (nbeerbower)
AI & ML interests: QLoRA finetuning and merging LLMs for fun.
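As a minimal sketch of the QLoRA finetuning mentioned above: load a base model in 4-bit and attach low-rank adapters with peft. The base model id, target modules, and hyperparameters below are illustrative assumptions, not settings taken from any of the models listed on this page.

```python
# Minimal QLoRA setup sketch (assumed model id and hyperparameters).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model_id = "mistralai/Mistral-Nemo-Base-2407"  # assumption: a typical 12B base model

# Quantize the base model to 4-bit (the "Q" in QLoRA).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, quantization_config=bnb_config, device_map="auto"
)

# Attach LoRA adapters; only these small matrices are trained.
model = prepare_model_for_kbit_training(model)
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```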
Recent Activity
New activity about 14 hours ago: nbeerbower/SmolNemo-12B-FFT-experimental (Adding Evaluation Results)
Liked a dataset about 19 hours ago: HuggingFaceTB/openstax_paragraphs
Updated a model 2 days ago: nbeerbower/Nemo-Loony-12B-experimental
Models (137)
nbeerbower/SmolNemo-12B-FFT-experimental • Text Generation • Updated • 16 downloads
nbeerbower/Nemo-Loony-12B-experimental • Text Generation • Updated • 13 downloads
nbeerbower/Mistral-Nemo-Moderne-12B-FFT-experimental • Text Generation • Updated • 15 downloads • 1 like
nbeerbower/Mistral-Gutenberg-Doppel-7B-FFT • Text Generation • Updated • 51 downloads • 1 like
nbeerbower/Qwen2.5-Gutenberg-Doppel-14B • Text Generation • Updated • 147 downloads • 11 likes
nbeerbower/Qwen2.5-Gutenberg-Doppel-32B • Text Generation • Updated • 96 downloads • 6 likes
nbeerbower/Mistral-Nemo-Prism-12B • Text Generation • Updated • 97 downloads • 2 likes
nbeerbower/Mistral-Nemo-Prism-12B-v2 • Text Generation • Updated • 47 downloads • 2 likes
nbeerbower/Mistral-Nemo-Prism-12B-v3 • Text Generation • Updated • 19 downloads
nbeerbower/Mistral-Nemo-Prism-12B-v4 • Text Generation • Updated • 23 downloads
Datasets (7)
nbeerbower/gutenberg-moderne-dpo • Viewer • Updated • 346 rows • 30 downloads • 1 like
nbeerbower/gutenberg2-dpo • Viewer • Updated • 293 rows • 111 downloads • 14 likes
nbeerbower/Schule-DPO • Viewer • Updated • 34 rows • 25 downloads
nbeerbower/cover-images • Viewer • Updated • 4 rows • 428 downloads
nbeerbower/Arkhaios-DPO • Viewer • Updated • 222 rows • 78 downloads • 6 likes
nbeerbower/Purpura-DPO • Viewer • Updated • 230 rows • 71 downloads • 5 likes
nbeerbower/bible-dpo • Viewer • Updated • 31.1k rows • 47 downloads
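The DPO datasets listed above can be pulled directly from the Hub with the datasets library. A minimal sketch, assuming a "train" split and the usual prompt/chosen/rejected preference-pair columns (check each dataset's viewer for the actual schema):

```python
# Sketch: load one of the listed preference-optimization datasets.
from datasets import load_dataset

# "train" split and column names are assumptions; inspect the dataset card to confirm.
ds = load_dataset("nbeerbower/gutenberg2-dpo", split="train")
print(ds)               # features and number of rows
print(ds[0].keys())     # expected preference-pair fields, e.g. prompt / chosen / rejected
```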