---
language:
  - en
license: apache-2.0
library_name: transformers
pipeline_tag: text-generation
model-index:
  - name: HermesStar-OrcaWind-Synth-11B
    results:
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: AI2 Reasoning Challenge (25-Shot)
          type: ai2_arc
          config: ARC-Challenge
          split: test
          args:
            num_few_shot: 25
        metrics:
          - type: acc_norm
            value: 65.27
            name: normalized accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Ba2han/HermesStar-OrcaWind-Synth-11B
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: HellaSwag (10-Shot)
          type: hellaswag
          split: validation
          args:
            num_few_shot: 10
        metrics:
          - type: acc_norm
            value: 83.69
            name: normalized accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Ba2han/HermesStar-OrcaWind-Synth-11B
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: MMLU (5-Shot)
          type: cais/mmlu
          config: all
          split: test
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 65.31
            name: accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Ba2han/HermesStar-OrcaWind-Synth-11B
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: TruthfulQA (0-shot)
          type: truthful_qa
          config: multiple_choice
          split: validation
          args:
            num_few_shot: 0
        metrics:
          - type: mc2
            value: 48.55
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Ba2han/HermesStar-OrcaWind-Synth-11B
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: Winogrande (5-shot)
          type: winogrande
          config: winogrande_xl
          split: validation
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 80.11
            name: accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Ba2han/HermesStar-OrcaWind-Synth-11B
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: GSM8k (5-shot)
          type: gsm8k
          config: main
          split: test
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 56.63
            name: accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Ba2han/HermesStar-OrcaWind-Synth-11B
          name: Open LLM Leaderboard
---

OpenHermes + Starling, passthrough-merged.

SlimOrca(?) + Zephyr Beta, linear-merged, then passthrough-merged with Synthia.

The two resulting models were then merged again at a 1:0.3 ratio.
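
The actual merge configs were not published with this card, but as a rough illustration, the final step could be expressed as a mergekit linear merge at the stated 1:0.3 weights. The local paths below are hypothetical placeholders, not the real intermediates:

```python
# Sketch only: writes a hypothetical mergekit config for the final linear
# merge of the two intermediate models at a 1:0.3 weight ratio.
import pathlib

config = """\
merge_method: linear
models:
  - model: ./hermes-starling-passthrough   # hypothetical intermediate 1
    parameters:
      weight: 1.0
  - model: ./orcawind-synthia-passthrough  # hypothetical intermediate 2
    parameters:
      weight: 0.3
dtype: float16
"""
pathlib.Path("final-merge.yml").write_text(config)
# Then run: mergekit-yaml final-merge.yml ./HermesStar-OrcaWind-Synth-11B
```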

My findings:

Increasing the repetition penalty usually makes the model smarter up to a point, but it also causes stability issues.
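
For concreteness, the repetition penalty is a generation-time knob; a minimal transformers sketch (the 1.15 value is just an example, not a tested recommendation):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Ba2han/HermesStar-OrcaWind-Synth-11B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Explain why the sky is blue."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# repetition_penalty > 1.0 discourages repeated tokens. Moderate values
# seem to help; pushing it too high triggers the stability issues above.
output = model.generate(**inputs, max_new_tokens=256, repetition_penalty=1.15)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```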

Since most of the merged models were trained with ChatML, use the ChatML template. Rarely, though, the model emits a different EOS token.

- My favorite preset has been uploaded.
- You can use some sort of CoT prompt in place of a plain "system" persona in ChatML; it improves the quality of most output (e.g. "You are an assistant. Break down the question and come to a conclusion."). See the sketch below.
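
A sketch of the ChatML + CoT-system-prompt suggestion using `apply_chat_template`. This assumes the repo's tokenizer ships a ChatML chat template; if not, write the `<|im_start|>`/`<|im_end|>` tags by hand:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Ba2han/HermesStar-OrcaWind-Synth-11B")

messages = [
    # A CoT-style instruction in place of a plain persona-only system message.
    {"role": "system",
     "content": "You are an assistant. Break down the question and come to a conclusion."},
    {"role": "user", "content": "Why does ice float on water?"},
]

# With a ChatML template this renders <|im_start|>role ... <|im_end|> blocks
# and appends the assistant header so generation starts in the right place.
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(prompt)
```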

I don't know what I am doing, so you are very welcome to put the model through benchmarks.

I'll also upload a q6 GGUF, but my internet is terrible, so don't hesitate to share other quantizations.
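
Once a GGUF is up (or if you quantize one yourself), a minimal llama-cpp-python sketch; the file name here is hypothetical:

```python
from llama_cpp import Llama

# Hypothetical file name; substitute whichever quantization you download.
llm = Llama(model_path="hermesstar-orcawind-synth-11b.q6_K.gguf", n_ctx=4096)

out = llm.create_chat_completion(messages=[
    {"role": "system",
     "content": "You are an assistant. Break down the question and come to a conclusion."},
    {"role": "user", "content": "Summarize what a passthrough merge does."},
])
print(out["choices"][0]["message"]["content"])
```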

# Open LLM Leaderboard Evaluation Results

Detailed results can be found [on the Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Ba2han/HermesStar-OrcaWind-Synth-11B).

| Metric                            | Value |
|-----------------------------------|------:|
| Avg.                              | 66.59 |
| AI2 Reasoning Challenge (25-Shot) | 65.27 |
| HellaSwag (10-Shot)               | 83.69 |
| MMLU (5-Shot)                     | 65.31 |
| TruthfulQA (0-shot)               | 48.55 |
| Winogrande (5-shot)               | 80.11 |
| GSM8k (5-shot)                    | 56.63 |