---
library_name: transformers
license: llama3.1
datasets:
- jondurbin/gutenberg-dpo-v0.1
- nbeerbower/gutenberg2-dpo
- jondurbin/truthy-dpo-v0.1
- kyujinpy/orca_math_dpo
- antiven0m/physical-reasoning-dpo
base_model:
- mlabonne/Meta-Llama-3.1-8B-Instruct-abliterated
---
# Llama3.1-Allades-8B
Allades is a fine-tune of abliterated Llama 3.1 8B Instruct on five DPO datasets, aimed at improving creative writing, reasoning, and roleplay.
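
The snippet below is a minimal usage sketch with `transformers`; the repo id is assumed to match this card, and the prompt is only an example.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "nbeerbower/Llama3.1-Allades-8B"  # assumed repo id for this card

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",
    device_map="auto",  # requires accelerate
)

messages = [
    {"role": "user", "content": "Write a short scene set in a lighthouse during a storm."},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
# Decode only the newly generated tokens.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```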
## Datasets
- jondurbin/gutenberg-dpo-v0.1
- nbeerbower/gutenberg2-dpo
- jondurbin/truthy-dpo-v0.1
- kyujinpy/orca_math_dpo
- antiven0m/physical-reasoning-dpo
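
As an illustration, the sources above could be pulled and merged into one preference set as sketched below. This is not the actual preparation script: it assumes each dataset already exposes the usual `prompt` / `chosen` / `rejected` DPO columns and simply drops anything else.

```python
from datasets import load_dataset, concatenate_datasets

sources = [
    "jondurbin/gutenberg-dpo-v0.1",
    "nbeerbower/gutenberg2-dpo",
    "jondurbin/truthy-dpo-v0.1",
    "kyujinpy/orca_math_dpo",
    "antiven0m/physical-reasoning-dpo",
]

# Assumed common DPO layout; extra columns are dropped before concatenation.
keep = ["prompt", "chosen", "rejected"]
parts = []
for name in sources:
    ds = load_dataset(name, split="train")
    parts.append(ds.remove_columns([c for c in ds.column_names if c not in keep]))

train_dataset = concatenate_datasets(parts).shuffle(seed=42)
print(train_dataset)
```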
## Training
ORPO-tuned for 1 epoch on 2x RTX 3090 GPUs (sponsored by Schneewolf Labs).
Data was prepared with Llama 3.1 Instruct.
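
The exact training script is not published here; the following is a minimal sketch of ORPO tuning with TRL. All hyperparameters are illustrative assumptions except the single epoch noted above, and only one of the five datasets is loaded to keep the example self-contained.

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import ORPOConfig, ORPOTrainer

base_id = "mlabonne/Meta-Llama-3.1-8B-Instruct-abliterated"
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype="auto")

# Expects prompt / chosen / rejected columns.
train_dataset = load_dataset("jondurbin/gutenberg-dpo-v0.1", split="train")

config = ORPOConfig(
    output_dir="Llama3.1-Allades-8B",
    num_train_epochs=1,              # matches the 1 epoch noted above
    per_device_train_batch_size=1,   # assumed
    gradient_accumulation_steps=8,   # assumed
    learning_rate=5e-6,              # assumed
    beta=0.1,                        # ORPO preference weight; assumed default
    max_length=2048,
    max_prompt_length=1024,
    bf16=True,
)

trainer = ORPOTrainer(
    model=model,
    args=config,
    train_dataset=train_dataset,
    processing_class=tokenizer,  # older TRL versions use tokenizer= instead
)
trainer.train()
```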