Full weight models (collection)

Select models uploaded in safetensors format. Currently all are merges. Annotations here.
This is a merge of pre-trained language models created using mergekit.
Built with Meta Llama 3.
This model was merged using the SLERP merge method.
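Conceptually, SLERP interpolates each pair of corresponding weight tensors along the arc between them on the hypersphere rather than along the straight line, which preserves the magnitude of the weights better than plain averaging. The snippet below is only an illustrative sketch of that interpolation for a single tensor pair; it is not mergekit's implementation, which also applies the per-layer `t` schedule from the configuration further down.

```python
import torch

def slerp(t: float, v0: torch.Tensor, v1: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Spherical linear interpolation between two weight tensors (illustrative only)."""
    # Flatten and normalize to measure the angle between the two weight directions.
    a = v0.flatten().float()
    b = v1.flatten().float()
    a_n = a / (a.norm() + eps)
    b_n = b / (b.norm() + eps)

    dot = torch.clamp(torch.dot(a_n, b_n), -1.0, 1.0)
    omega = torch.acos(dot)  # angle between the two tensors

    # Nearly parallel tensors: fall back to ordinary linear interpolation.
    if omega < 1e-6:
        return (1.0 - t) * v0 + t * v1

    sin_omega = torch.sin(omega)
    s0 = torch.sin((1.0 - t) * omega) / sin_omega
    s1 = torch.sin(t * omega) / sin_omega
    return (s0 * a + s1 * b).reshape(v0.shape).to(v0.dtype)
```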
The following models were included in the merge:

* princeton-nlp/Llama-3-Instruct-8B-SimPO
* UCLA-AGI/Llama-3-Instruct-8B-SPPO-Iter3
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
  - model: princeton-nlp/Llama-3-Instruct-8B-SimPO
    layer_range:
    - 0
    - 32
  - model: UCLA-AGI/Llama-3-Instruct-8B-SPPO-Iter3
    layer_range:
    - 0
    - 32
merge_method: slerp
base_model: UCLA-AGI/Llama-3-Instruct-8B-SPPO-Iter3
parameters:
  t:
  - filter: self_attn
    value:
    - 0
    - 0.5
    - 0.3
    - 0.7
    - 1
  - filter: mlp
    value:
    - 1
    - 0.5
    - 0.7
    - 0.3
    - 0
  - value: 0.5
dtype: bfloat16
```
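For reference, a merge with this configuration can be reproduced with mergekit itself, either through its `mergekit-yaml` CLI or from Python. The sketch below assumes the YAML above is saved as `config.yaml` and uses the `MergeConfiguration`/`run_merge` entry points described in mergekit's README; the output directory is a placeholder.

```python
import yaml
import torch
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Parse the merge configuration shown above (assumed saved as config.yaml).
with open("config.yaml", "r", encoding="utf-8") as f:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(f))

# Run the merge; both source models are pulled from the Hugging Face Hub.
run_merge(
    merge_config,
    "./merged-model",                     # placeholder output directory
    options=MergeOptions(
        cuda=torch.cuda.is_available(),   # use a GPU for the merge if one is present
        copy_tokenizer=True,              # copy the tokenizer from the base model
        lazy_unpickle=True,               # lower peak memory while loading shards
    ),
)
```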
Detailed results can be found here.
| Metric              | Value |
|---------------------|------:|
| Avg.                | 20.74 |
| IFEval (0-Shot)     | 42.71 |
| BBH (3-Shot)        | 28.26 |
| MATH Lvl 5 (4-Shot) |  9.37 |
| GPQA (0-shot)       |  5.37 |
| MuSR (0-shot)       |  9.54 |
| MMLU-PRO (5-shot)   | 29.17 |
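As a usage sketch, the merged weights load like any other Llama-3-8B-Instruct checkpoint via transformers. The repository id below is a placeholder for wherever the merged model is hosted.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "your-username/llama-3-8b-slerp-merge"  # placeholder, substitute the actual repo

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.bfloat16,  # matches the dtype declared in the merge config
    device_map="auto",
)

messages = [{"role": "user", "content": "What does a SLERP model merge do?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```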