---
base_model:
  - alpindale/WizardLM-2-8x22B
  - HuggingFaceH4/zephyr-orpo-141b-A35b-v0.1
library_name: transformers
tags:
  - mergekit
  - merge
license: cc-by-nc-sa-4.0
---

# merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/arcee-ai/mergekit).

## Merge Details

### Models Merged

The following models were included in the merge:

- [alpindale/WizardLM-2-8x22B](https://huggingface.co/alpindale/WizardLM-2-8x22B)
- [HuggingFaceH4/zephyr-orpo-141b-A35b-v0.1](https://huggingface.co/HuggingFaceH4/zephyr-orpo-141b-A35b-v0.1)
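
Since the metadata above declares `library_name: transformers`, the merged weights should load like any other transformers checkpoint. Below is a minimal inference sketch; the chat-template call assumes the tokenizer ships a chat template, and the hardware comments are assumptions rather than a verified setup.

```python
# Minimal inference sketch (assumes enough GPU memory / accelerate's
# device_map support for a ~141B-parameter MoE checkpoint).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "tlphams/Wizard-Zephyr-Orpo-8x22B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # keep the checkpoint's native dtype
    device_map="auto",    # shard across available GPUs
)

messages = [{"role": "user", "content": "Explain model merging in one paragraph."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```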

## Benchmark results

### 1. MT-Bench from lmsys

We adapted the code from [FastChat](https://github.com/lm-sys/FastChat) to benchmark our model with GPT-4 as a judge. A sketch of the judging loop and the resulting scores follow.
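
The heart of MT-Bench's single-answer grading is a judge prompt that asks GPT-4 for a `[[rating]]` between 1 and 10, which is then parsed out of the reply. The sketch below mirrors that idea; it is not the author's adapted FastChat code, and the prompt wording, the `judge_one_answer` helper, and the `openai` client usage are illustrative assumptions.

```python
# Illustrative sketch of MT-Bench-style single-answer grading with GPT-4
# as a judge. This mirrors the idea behind FastChat's llm_judge pipeline,
# not its exact prompts or code; judge_one_answer is a hypothetical helper.
import re
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

JUDGE_PROMPT = (
    "Please act as an impartial judge and rate the quality of the "
    "assistant's answer to the user's question on a scale of 1 to 10. "
    "Output your rating in the format [[rating]].\n\n"
    "Question: {question}\n\nAssistant's answer: {answer}"
)

def judge_one_answer(question: str, answer: str) -> float:
    """Ask GPT-4 for a [[rating]] and parse it out of the reply."""
    reply = client.chat.completions.create(
        model="gpt-4",
        temperature=0,
        messages=[{
            "role": "user",
            "content": JUDGE_PROMPT.format(question=question, answer=answer),
        }],
    )
    match = re.search(r"\[\[(\d+(?:\.\d+)?)\]\]", reply.choices[0].message.content)
    return float(match.group(1)) if match else float("nan")
```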

| Model                                 | Turn 1 score | Turn 2 score | Average  |
|---------------------------------------|--------------|--------------|----------|
| tlphams/Wizard-Zephyr-Orpo-8x22B      | 9.1625       | 8.873418     | 9.018868 |
| mistralai/Mixtral-8x22B-Instruct-v0.1 | 9.1500       | 8.250000     | 8.700000 |
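
A note on how the averages are computed: FastChat's result script averages over every judged answer rather than averaging the two per-turn means, which is why 9.018868 is not exactly the midpoint of 9.1625 and 8.873418. Below is a pandas sketch of that aggregation; the file name and exact schema (one JSON record per judged answer with `model`, `turn`, and `score` fields) are assumptions based on FastChat's llm_judge output.

```python
# Sketch of how per-turn and overall MT-Bench averages are aggregated
# from a FastChat-style judgment file (assumed name and schema).
import pandas as pd

df = pd.read_json("gpt4_single_judgments.jsonl", lines=True)
df = df[df["score"] != -1]  # drop answers the judge failed to score

per_turn = df.groupby(["model", "turn"])["score"].mean()
overall = df.groupby("model")["score"].mean()  # averages every judged answer

print(per_turn)
print(overall)
```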

The average score is slightly lower than alpindale/WizardLM-2-8x22B's, but still higher than GPT-4-0314's, so the research and experimental work will continue ^^