---
library_name: transformers
tags:
- mergekit
- merge
base_model:
- ZeroXClem/Qwen2.5-7B-HomerAnvita-NerdMix
- ZeroXClem/Qwen2.5-7B-HomerCreative-Mix
model-index:
- name: HomerCreativeAnvita-Mix-Qw7B
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: IFEval (0-Shot)
      type: HuggingFaceH4/ifeval
      args:
        num_few_shot: 0
    metrics:
    - type: inst_level_strict_acc and prompt_level_strict_acc
      value: 78.08
      name: strict accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=suayptalha/HomerCreativeAnvita-Mix-Qw7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: BBH (3-Shot)
      type: BBH
      args:
        num_few_shot: 3
    metrics:
    - type: acc_norm
      value: 36.98
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=suayptalha/HomerCreativeAnvita-Mix-Qw7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MATH Lvl 5 (4-Shot)
      type: hendrycks/competition_math
      args:
        num_few_shot: 4
    metrics:
    - type: exact_match
      value: 31.04
      name: exact match
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=suayptalha/HomerCreativeAnvita-Mix-Qw7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GPQA (0-shot)
      type: Idavidrein/gpqa
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 8.61
      name: acc_norm
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=suayptalha/HomerCreativeAnvita-Mix-Qw7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MuSR (0-shot)
      type: TAUR-Lab/MuSR
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 14.73
      name: acc_norm
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=suayptalha/HomerCreativeAnvita-Mix-Qw7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU-PRO (5-shot)
      type: TIGER-Lab/MMLU-Pro
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 38.28
      name: accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=suayptalha/HomerCreativeAnvita-Mix-Qw7B
      name: Open LLM Leaderboard
---

# Merged Model

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

![HomerCreativeAnvita-Logo](HomerCreativeAnvita.jpeg)

This model is currently ranked #1 on the Open LLM Leaderboard among models up to 13B parameters!

## Merge Details

### Merge Method

This model was merged using the SLERP merge method.
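SLERP (spherical linear interpolation) blends each pair of weight tensors along the arc between them rather than along a straight line; the `t` values in the configuration below control how far each layer group sits toward either parent. The following NumPy sketch illustrates the interpolation only and is not mergekit's exact implementation (the `slerp_np` name and the fallback tolerance are assumptions):

```python
import numpy as np

def slerp_np(t: float, v0: np.ndarray, v1: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Spherical linear interpolation between two flattened weight tensors.

    Illustrative only -- mergekit's own SLERP handles dtypes, normalization,
    and degenerate cases with more care.
    """
    a = v0.ravel().astype(np.float64)
    b = v1.ravel().astype(np.float64)
    # Angle between the two weight vectors
    cos_omega = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + eps)
    omega = np.arccos(np.clip(cos_omega, -1.0, 1.0))
    if np.sin(omega) < eps:
        # Nearly parallel vectors: fall back to ordinary linear interpolation
        out = (1.0 - t) * a + t * b
    else:
        out = (np.sin((1.0 - t) * omega) * a + np.sin(t * omega) * b) / np.sin(omega)
    return out.reshape(v0.shape)
```

With `t = 0.0` the result is the base model's tensor, with `t = 1.0` the other parent's, and intermediate values move along the arc between them.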
### Models Merged

The following models were included in the merge:

* [ZeroXClem/Qwen2.5-7B-HomerAnvita-NerdMix](https://huggingface.co/ZeroXClem/Qwen2.5-7B-HomerAnvita-NerdMix)
* [ZeroXClem/Qwen2.5-7B-HomerCreative-Mix](https://huggingface.co/ZeroXClem/Qwen2.5-7B-HomerCreative-Mix)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
base_model: ZeroXClem/Qwen2.5-7B-HomerCreative-Mix
dtype: bfloat16
merge_method: slerp
parameters:
  t:
  - filter: self_attn
    value: [0.0, 0.5, 0.3, 0.7, 1.0]
  - filter: mlp
    value: [1.0, 0.5, 0.7, 0.3, 0.0]
  - value: 0.5
slices:
- sources:
  - layer_range: [0, 28]
    model: ZeroXClem/Qwen2.5-7B-HomerCreative-Mix
  - layer_range: [0, 28]
    model: ZeroXClem/Qwen2.5-7B-HomerAnvita-NerdMix
```

# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)

Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_suayptalha__HomerCreativeAnvita-Mix-Qw7B).

| Metric              | Value |
|---------------------|------:|
| Avg.                | 34.62 |
| IFEval (0-Shot)     | 78.08 |
| BBH (3-Shot)        | 36.98 |
| MATH Lvl 5 (4-Shot) | 31.04 |
| GPQA (0-shot)       |  8.61 |
| MuSR (0-shot)       | 14.73 |
| MMLU-PRO (5-shot)   | 38.28 |
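A minimal inference sketch with 🤗 Transformers, assuming the repository id `suayptalha/HomerCreativeAnvita-Mix-Qw7B` from the leaderboard links above and the standard Qwen2.5 chat template; adjust the dtype and device placement to your hardware:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Repo id assumed from the leaderboard links above
model_id = "suayptalha/HomerCreativeAnvita-Mix-Qw7B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the merge dtype
    device_map="auto",
)

messages = [{"role": "user", "content": "Write a short poem about merging language models."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```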