
This model was released with the preprint SimPO: Simple Preference Optimization with a Reference-Free Reward. Please refer to our repository for more details.
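As the preprint's title states, the SimPO objective is reference-free: it scores a response by the length-normalized log probability the policy assigns to it, with no reference model. A toy numeric sketch of that idea (the helper names and the β, γ values here are illustrative, not taken from the paper's released code):

```python
import math

def simpo_reward(sum_logprob, length, beta=2.0):
    # Length-normalized implicit reward: (beta / |y|) * log pi(y | x).
    return beta * sum_logprob / length

def simpo_loss(sum_logprob_w, len_w, sum_logprob_l, len_l, beta=2.0, gamma=0.5):
    # Bradley-Terry-style loss on the reward difference between the
    # preferred (w) and dispreferred (l) response, minus a margin gamma.
    margin = (simpo_reward(sum_logprob_w, len_w, beta)
              - simpo_reward(sum_logprob_l, len_l, beta)
              - gamma)
    return -math.log(1.0 / (1.0 + math.exp(-margin)))  # -log sigmoid(margin)
```

The loss shrinks as the preferred response becomes more probable relative to the dispreferred one, and the length normalization keeps longer responses from being rewarded merely for accumulating log-probability mass.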

Format: Safetensors
Model size: 7.24B params
Tensor type: BF16

Model: princeton-nlp/Mistral-7B-Base-SFT-SimPO