This is a merge of pre-trained language models created using mergekit.
This is an experiment to see what happens when two o1-inspired models are merged. The result achieves an unexpectedly high score of 33.99% on the MATH Lvl 5 benchmark.
Built with Llama.
This model was merged using the SLERP merge method.
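SLERP (spherical linear interpolation) blends each pair of corresponding weight tensors along the arc between them rather than along a straight line, which respects the geometry between the two parents better than plain averaging. The snippet below is a minimal sketch of the idea for a single tensor pair, assuming PyTorch; `slerp` is an illustrative helper, not mergekit's internal implementation.

```python
import torch

def slerp(t: float, v0: torch.Tensor, v1: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Spherical linear interpolation between two corresponding weight tensors."""
    # Flatten and normalize to measure the angle between the two tensors.
    a = v0.flatten().float()
    b = v1.flatten().float()
    dot = torch.clamp((a / (a.norm() + eps)) @ (b / (b.norm() + eps)), -1.0, 1.0)
    omega = torch.arccos(dot)

    # Nearly colinear tensors: fall back to ordinary linear interpolation.
    if omega.abs() < eps:
        return (1.0 - t) * v0 + t * v1

    sin_omega = torch.sin(omega)
    scale0 = torch.sin((1.0 - t) * omega) / sin_omega
    scale1 = torch.sin(t * omega) / sin_omega
    return (scale0 * v0 + scale1 * v1).to(v0.dtype)

# With t = 0.5, as in the configuration below, each merged tensor sits
# halfway along the arc between the two parent models' tensors.
```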
The following models were included in the merge:

* Skywork/Skywork-o1-Open-Llama-3.1-8B
* FreedomIntelligence/HuatuoGPT-o1-8B
The following YAML configuration was used to produce this model:
```yaml
models:
  - model: Skywork/Skywork-o1-Open-Llama-3.1-8B
  - model: FreedomIntelligence/HuatuoGPT-o1-8B
merge_method: slerp
base_model: Skywork/Skywork-o1-Open-Llama-3.1-8B
parameters:
  t:
    - value: 0.5
dtype: bfloat16
```
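The merge can be reproduced by pointing mergekit's `mergekit-yaml` CLI at this configuration, which writes a standard transformers checkpoint. The sketch below then loads and queries that checkpoint; the local path `./merged-model` and the sample prompt are placeholders, not part of this repository.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder output directory, e.g. produced by:
#   mergekit-yaml config.yml ./merged-model
model_path = "./merged-model"

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(model_path, torch_dtype=torch.bfloat16)

messages = [{"role": "user", "content": "What is the derivative of x^3 + 2x?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
outputs = model.generate(inputs, max_new_tokens=256)

# Decode only the newly generated tokens.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```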
Detailed results can be found here! Summarized results can be found here!
| Metric              | Value (%) |
|---------------------|-----------|
| Average             | 23.67     |
| IFEval (0-Shot)     | 39.61     |
| BBH (3-Shot)        | 28.33     |
| MATH Lvl 5 (4-Shot) | 33.99     |
| GPQA (0-Shot)       | 5.70      |
| MuSR (0-Shot)       | 11.12     |
| MMLU-PRO (5-Shot)   | 23.28     |
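As a quick sanity check, the reported Average matches the arithmetic mean of the six benchmark scores above; the short snippet below reproduces it.

```python
# Benchmark scores from the table above.
scores = {
    "IFEval": 39.61,
    "BBH": 28.33,
    "MATH Lvl 5": 33.99,
    "GPQA": 5.70,
    "MuSR": 11.12,
    "MMLU-PRO": 23.28,
}

average = sum(scores.values()) / len(scores)
print(f"{average:.2f}")  # 23.67
```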