This is a merge of pre-trained language models created using mergekit.
This model was merged using the della_linear merge method, with unsloth/Meta-Llama-3.1-8B as the base model; a sketch of how the method works follows the configuration below.
The following models were included in the merge:

* arcee-ai/Llama-3.1-SuperNova-Lite (with the grimjim/Llama-3-Instruct-abliteration-LoRA-8B LoRA applied)
* hf-100/Llama-3-Spellbound-Instruct-8B-0.3
* djuna/L3.1-Suze-Vume-2-calc
* THUDM/LongWriter-llama3.1-8b (with the ResplendentAI/Smarts_Llama3 LoRA applied)
* djuna/L3.1-ForStHS (with the Blackroot/Llama-3-8B-Abomination-LORA LoRA applied)

The `model+lora` entries in the configuration below use mergekit's syntax for applying a LoRA to a model before merging.
The following YAML configuration was used to produce this model:
```yaml
merge_method: della_linear
dtype: bfloat16
parameters:
  epsilon: 0.1
  lambda: 1.0
  int8_mask: true
  normalize: true
base_model: unsloth/Meta-Llama-3.1-8B
models:
  - model: arcee-ai/Llama-3.1-SuperNova-Lite+grimjim/Llama-3-Instruct-abliteration-LoRA-8B
    parameters:
      weight: 1
      density: 0.5
  - model: hf-100/Llama-3-Spellbound-Instruct-8B-0.3
    parameters:
      weight: 1
      density: 0.45
  - model: djuna/L3.1-Suze-Vume-2-calc
    parameters:
      weight: 1
      density: 0.45
  - model: THUDM/LongWriter-llama3.1-8b+ResplendentAI/Smarts_Llama3
    parameters:
      weight: 1
      density: 0.55
  - model: djuna/L3.1-ForStHS+Blackroot/Llama-3-8B-Abomination-LORA
    parameters:
      weight: 1
      density: 0.5
```
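For intuition, here is a minimal sketch of what a della_linear-style merge does to a single weight tensor. This is an illustration of the technique under stated assumptions, not mergekit's actual implementation: `della_linear_merge` and its arguments are invented for the example, and the rank-based keep-probability schedule is a simplified stand-in for DELLA's magnitude-based sampling.

```python
import torch

def della_linear_merge(base, finetuned, weights, densities,
                       epsilon=0.1, lamb=1.0, normalize=True):
    """Illustrative merge of one tensor from several fine-tunes into a base."""
    merged_delta = torch.zeros_like(base)
    total_weight = 0.0
    for ft, w, density in zip(finetuned, weights, densities):
        delta = ft - base  # task vector: what fine-tuning changed
        # Rank entries by |delta| and assign keep-probabilities in
        # [density - epsilon, density + epsilon], so larger-magnitude
        # entries are more likely to survive the random drop.
        ranks = delta.abs().flatten().argsort().argsort().float()
        ranks = ranks / max(ranks.numel() - 1, 1)  # 0..1, 1 = largest
        keep_p = (density - epsilon) + 2.0 * epsilon * ranks
        keep_p = keep_p.clamp(1e-6, 1.0).reshape(delta.shape)
        mask = torch.bernoulli(keep_p)
        # Rescale survivors by 1/p so the sparsified delta is
        # unbiased in expectation, then mix linearly.
        merged_delta += w * (delta * mask / keep_p)
        total_weight += w
    if normalize and total_weight > 0:
        merged_delta /= total_weight  # normalize: true in the config
    return base + lamb * merged_delta  # lambda scales the merged delta
```

In the configuration above, each model's `density` sets the expected fraction of its task vector that survives the drop, `epsilon` controls how strongly survival is biased toward large-magnitude entries, `lambda: 1.0` leaves the combined delta unscaled, and `normalize: true` divides it by the sum of the weights. The configuration itself can be run with mergekit's `mergekit-yaml` command-line tool, e.g. `mergekit-yaml config.yaml ./output-model`.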
Detailed results from the Open LLM Leaderboard evaluation can be found here:
| Metric              | Value |
|---------------------|------:|
| Avg.                | 22.85 |
| IFEval (0-shot)     | 49.88 |
| BBH (3-shot)        | 31.39 |
| MATH Lvl 5 (4-shot) | 10.12 |
| GPQA (0-shot)       |  6.82 |
| MuSR (0-shot)       |  8.30 |
| MMLU-PRO (5-shot)   | 30.57 |
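To try the merged model, a standard transformers loading snippet should work. The model id below is a placeholder (this card does not state the final repo name); substitute the actual repo id or the local directory mergekit wrote:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder: replace with this merge's actual repo id or local path.
model_id = "path/to/this-merged-model"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # the merge was produced in bfloat16
    device_map="auto",           # requires the accelerate package
)

prompt = "Briefly explain what a model merge is."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```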