---
base_model:
- alpindale/WizardLM-2-8x22B
- HuggingFaceH4/zephyr-orpo-141b-A35b-v0.1
library_name: transformers
tags:
- mergekit
- merge
license: cc-by-nc-sa-4.0
---
# Wizard-Zephyr-Orpo-8x22B

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details
### Models Merged

The following models were included in the merge:
* [alpindale/WizardLM-2-8x22B](https://huggingface.co/alpindale/WizardLM-2-8x22B)
* [HuggingFaceH4/zephyr-orpo-141b-A35b-v0.1](https://huggingface.co/HuggingFaceH4/zephyr-orpo-141b-A35b-v0.1)
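
Because the card declares `library_name: transformers`, the merged checkpoint can be loaded like any other Mixtral-style model. The snippet below is a minimal usage sketch, not part of the original card: the repository id is taken from the benchmark table further down, and the dtype, `device_map`, and chat template are assumptions (the prompt format inherited by the merge is not documented here).

```python
# Minimal usage sketch (assumptions noted above): load the merged model with transformers.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "tlphams/Wizard-Zephyr-Orpo-8x22B"  # repo id as it appears in the benchmark table

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # assumption: bf16 weights; adjust to your hardware
    device_map="auto",           # shard the 8x22B experts across available GPUs
)

# Assumes the tokenizer ships a chat template (e.g. inherited from Zephyr).
messages = [{"role": "user", "content": "Explain what a model merge is in one paragraph."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output_ids = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```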

## Benchmark results
### 1. MT-Bench from LMSYS
We adapted the code from [FastChat](https://github.com/lm-sys/FastChat/tree/main/fastchat/llm_judge) to benchmark our model with GPT-4 as the judge. Here are the results:
| Turn    | Model                                   | Score    |
|---------|-----------------------------------------|----------|
| 1       | tlphams/Wizard-Zephyr-Orpo-8x22B        | 9.1625   |
| 1       | mistralai/Mixtral-8x22B-Instruct-v0.1   | 9.1500   |
| 2       | tlphams/Wizard-Zephyr-Orpo-8x22B        | 8.873418 |
| 2       | mistralai/Mixtral-8x22B-Instruct-v0.1   | 8.250000 |
| Average | tlphams/Wizard-Zephyr-Orpo-8x22B        | 9.018868 |
| Average | mistralai/Mixtral-8x22B-Instruct-v0.1   | 8.700000 |
The score is slightly lower than [alpindale/WizardLM-2-8x22B](https://huggingface.co/alpindale/WizardLM-2-8x22B) but still higher than GPT-4-0314, so research and experimentation will continue ^^
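
For reference on how these numbers are aggregated: FastChat's `llm_judge` pipeline writes one GPT-4 judgment per answer, and the per-turn and average scores above are means over those individual judgments. The sketch below is an illustrative reimplementation of that aggregation, assuming the single-answer judgment JSONL format (records with `model`, `turn`, and `score` fields); the file path is an assumption, not taken from this card.

```python
# Illustrative aggregation of MT-Bench single-answer judgments, mirroring the
# way FastChat's show_result.py reports per-turn and overall means.
import json
from collections import defaultdict

# Assumed location/format of the GPT-4 judgment file written by gen_judgment.py.
judgment_file = "data/mt_bench/model_judgment/gpt-4_single.jsonl"

scores = defaultdict(list)  # (model, turn) -> list of judge scores
with open(judgment_file) as f:
    for line in f:
        rec = json.loads(line)
        if rec["score"] >= 0:  # answers the judge failed to rate are recorded as -1
            scores[(rec["model"], rec["turn"])].append(rec["score"])

for model in sorted({m for m, _ in scores}):
    turn1, turn2 = scores[(model, 1)], scores[(model, 2)]
    overall = turn1 + turn2
    print(
        f"{model}: turn 1 = {sum(turn1) / len(turn1):.4f}, "
        f"turn 2 = {sum(turn2) / len(turn2):.4f}, "
        f"average = {sum(overall) / len(overall):.6f}"
    )
```

Note that the overall average is a mean over all valid judgments, so it need not equal the midpoint of the two per-turn scores when the number of valid judgments differs between turns; this is why 9.018868 is not exactly (9.1625 + 8.873418) / 2.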