# Model Card for d4niel92/llama-3.2-1B-orpo
## Model Description
This model is meta-llama/Llama-3.2-1B fine-tuned on a subset of the preference dataset "mlabonne/orpo-dpo-mix-40k".
## Evaluation Results
### HellaSwag
| Metric   | Value  |
|----------|--------|
| Accuracy | 0.4517 |
## How to Use
Download the checkpoint from the Hugging Face Hub and load it with your preferred deep learning framework.
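For example, a minimal sketch using the Hugging Face `transformers` library (assuming the model is hosted at `d4niel92/llama-3.2-1B-orpo`, as listed in the model tree below):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "d4niel92/llama-3.2-1B-orpo"

# Load the tokenizer and model weights from the Hugging Face Hub
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Tokenize a prompt and generate a continuation
prompt = "The key idea behind preference optimization is"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)

print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The same checkpoint can also be loaded with any framework that supports the safetensors/PyTorch checkpoint format.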
## Model tree for d4niel92/llama-3.2-1B-orpo
- Base model: meta-llama/Llama-3.2-1B