
Barcenas Llama3 8b ORPO

Model trained with the novel ORPO method, based on Llama 3 8b, specifically: VAGOsolutions/Llama-3-SauerkrautLM-8b-Instruct
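ORPO (Odds Ratio Preference Optimization) augments the standard supervised fine-tuning loss with an odds-ratio penalty that pushes the model to prefer the chosen response over the rejected one. A minimal sketch of that penalty term, assuming the inputs are mean per-token log-probabilities of each response under the model (the function names here are illustrative, not from any library):

```python
import math

def log_odds(avg_logprob: float) -> float:
    # odds(y|x) = p / (1 - p), where p = exp(average per-token log-prob)
    # log odds = log p - log(1 - p), computed stably with log1p
    return avg_logprob - math.log1p(-math.exp(avg_logprob))

def orpo_penalty(chosen_logprob: float, rejected_logprob: float) -> float:
    # L_OR = -log sigmoid(log odds(chosen) - log odds(rejected))
    ratio = log_odds(chosen_logprob) - log_odds(rejected_logprob)
    return -math.log(1.0 / (1.0 + math.exp(-ratio)))
```

The full ORPO objective adds this penalty (scaled by a weighting coefficient) to the negative log-likelihood of the chosen response, so preference alignment happens in a single training stage without a separate reference model. In practice this is what libraries such as TRL implement for ORPO training.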

The model was trained with the dataset reciperesearch/dolphin-sft-v0.1-preference, which uses Dolphin data with GPT-4 to improve its conversation sections.

Made with ❤️ in Guadalupe, Nuevo Leon, Mexico 🇲🇽

Model size: 8.03B params
Tensor type: FP16 (Safetensors)
Downloads last month: 16,400

Model tree for Danielbrdz/Barcenas-Llama3-8b-ORPO
Merges: 20 models
Quantizations: 3 models

Spaces using Danielbrdz/Barcenas-Llama3-8b-ORPO: 7