
Model Merge Parameters

Base model: meta-llama/Meta-Llama-3-8B-Instruct

Models:
- failspy/Meta-Llama-3-8B-Instruct-abliterated-v3
- VAGOsolutions/Llama-3-SauerkrautLM-8b-Instruct
- DeepMount00/Llama-3-8b-Ita
- nbeerbower/llama-3-gutenberg-8B
- jpacifico/French-Alpaca-Llama3-8B-Instruct-v1.0
- meta-llama/Meta-Llama-3-8B-Instruct

Merge method: breadcrumbs
Random seed: 42

Parameters:
- density: 0.5
- gamma: 0.01
- normalize: true
- weight: 1.0
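For reference, the parameters above can be expressed as a mergekit-style YAML configuration. This is an illustrative sketch assuming mergekit's standard schema for the `breadcrumbs` merge method; the exact configuration file used for this merge is not included in the card.

```yaml
# Hypothetical mergekit config reconstructed from the parameters above.
merge_method: breadcrumbs
base_model: meta-llama/Meta-Llama-3-8B-Instruct
models:
  - model: failspy/Meta-Llama-3-8B-Instruct-abliterated-v3
    parameters: {density: 0.5, gamma: 0.01, weight: 1.0}
  - model: VAGOsolutions/Llama-3-SauerkrautLM-8b-Instruct
    parameters: {density: 0.5, gamma: 0.01, weight: 1.0}
  - model: DeepMount00/Llama-3-8b-Ita
    parameters: {density: 0.5, gamma: 0.01, weight: 1.0}
  - model: nbeerbower/llama-3-gutenberg-8B
    parameters: {density: 0.5, gamma: 0.01, weight: 1.0}
  - model: jpacifico/French-Alpaca-Llama3-8B-Instruct-v1.0
    parameters: {density: 0.5, gamma: 0.01, weight: 1.0}
parameters:
  normalize: true
dtype: bfloat16  # matches the BF16 tensor type reported below
```

A config like this would typically be run with `mergekit-yaml <config> <output-dir>`, using seed 42 as noted above.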

Open LLM Leaderboard Evaluation Results

Detailed results can be found here

| Metric              | Value |
|---------------------|------:|
| Avg.                | 19.38 |
| IFEval (0-Shot)     | 32.04 |
| BBH (3-Shot)        | 27.67 |
| MATH Lvl 5 (4-Shot) |  0.00 |
| GPQA (0-shot)       |  6.94 |
| MuSR (0-shot)       | 23.62 |
| MMLU-PRO (5-shot)   | 26.05 |
Format: Safetensors
Model size: 8.03B params
Tensor type: BF16

Model tree for johnsutor/Llama-3-8B-Instruct_breadcrumbs-density-0.5-gamma-0.01
