JW17 committed on
Commit 382418f
1 Parent(s): 0b45748

Add IFEval and Open-LLM-Leaderboard official results

Files changed (1):
  1. README.md (+123 -3)
README.md CHANGED
@@ -10,6 +10,117 @@ pipeline_tag: text-generation
 model-index:
 - name: Mistral-ORPO-β
   results:
+  # AI2 Reasoning Challenge (25-Shot)
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: AI2 Reasoning Challenge (25-Shot)
+      type: ai2_arc
+      config: ARC-Challenge
+      split: test
+      args:
+        num_few_shot: 25
+    metrics:
+    - type: acc_norm
+      name: normalized accuracy
+      value: 61.18
+    source:
+      name: Open LLM Leaderboard
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kaist-ai%2Fmistral-orpo-beta
+
+  # HellaSwag (10-shot)
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: HellaSwag (10-Shot)
+      type: hellaswag
+      split: validation
+      args:
+        num_few_shot: 10
+    metrics:
+    - type: acc_norm
+      name: normalized accuracy
+      value: 84.03
+    source:
+      name: Open LLM Leaderboard
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kaist-ai%2Fmistral-orpo-beta
+
+  # TruthfulQA (0-shot)
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: TruthfulQA (0-shot)
+      type: truthful_qa
+      config: multiple_choice
+      split: validation
+      args:
+        num_few_shot: 0
+    metrics:
+    - type: mc2
+      value: 47.69
+    source:
+      name: Open LLM Leaderboard
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kaist-ai%2Fmistral-orpo-beta
+
+  # GSM8k (5-shot)
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: GSM8k (5-shot)
+      type: gsm8k
+      config: main
+      split: test
+      args:
+        num_few_shot: 5
+    metrics:
+    - type: acc
+      name: accuracy
+      value: 39.8
+    source:
+      name: Open LLM Leaderboard
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kaist-ai%2Fmistral-orpo-beta
+
+  # MMLU (5-Shot)
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: MMLU (5-Shot)
+      type: cais/mmlu
+      config: all
+      split: test
+      args:
+        num_few_shot: 5
+    metrics:
+    - type: acc
+      name: accuracy
+      value: 63.26
+    source:
+      name: Open LLM Leaderboard
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kaist-ai%2Fmistral-orpo-beta
+
+  # Winogrande (5-shot)
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: Winogrande (5-shot)
+      type: winogrande
+      config: winogrande_xl
+      split: validation
+      args:
+        num_few_shot: 5
+    metrics:
+    - type: acc
+      name: accuracy
+      value: 79.24
+    source:
+      name: Open LLM Leaderboard
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kaist-ai%2Fmistral-orpo-beta
   - task:
       type: text-generation
     dataset:
@@ -51,7 +162,9 @@ model-index:
 
 **Mistral-ORPO** is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) using the *odds ratio preference optimization (ORPO)*. With ORPO, the model directly learns the preference without the supervised fine-tuning warmup phase. **Mistral-ORPO-β** is fine-tuned exclusively on the 61k instances of the cleaned version of UltraFeedback, [argilla/ultrafeedback-binarized-preferences-cleaned](https://huggingface.co/datasets/argilla/ultrafeedback-binarized-preferences-cleaned), by [Argilla](https://huggingface.co/argilla).
 
-## Model Performance
+## 👍 **Model Performance**
+
+### 1) AlpacaEval & MT-Bench
 
 |Model Name|Size|Align|MT-Bench|AlpacaEval 1.0|AlpacaEval 2.0|
 |:--------|:--------------:|:--------------:|:-------------------:|:------------:|:------------:|
@@ -62,11 +175,18 @@ model-index:
 |Llama-2-Chat |7B|RLHF|6.27|71.37|4.96|
 |Llama-2-Chat |13B|RLHF|6.65|81.09|7.70|
 
-## MT-Bench
+### 2) IFEval
+
+| **Model Type** | **Prompt-Strict** | **Prompt-Loose** | **Inst-Strict** | **Inst-Loose** |
+|--------------------|:-----------------:|:----------------:|:---------------:|:--------------:|
+| **Mistral-ORPO-⍺** | 0.5009 | 0.5083 | 0.5995 | 0.6163 |
+| **Mistral-ORPO-β** | 0.5287 | 0.5564 | 0.6355 | 0.6619 |
+
+## 🗺️ **MT-Bench by Category**
 
 ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6415c043486c7c9a5d151583/1Ifpt0ljCfJPEoZAqlqqy.png)
 
-## Inference
+## 🖥️ **Inference**
 
 ```python
 from transformers import AutoModelForCausalLM, AutoTokenizer
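For reference, the ORPO objective that the README's description alludes to augments the standard NLL loss on the chosen response with a penalty on the odds ratio between the chosen and rejected responses. The following is a minimal sketch, not the training code from this commit: the function name `orpo_loss`, the scalar length-normalized log-likelihood inputs, and the weight `lam=0.1` are all illustrative assumptions.

```python
import math

def orpo_loss(logp_chosen, logp_rejected, lam=0.1):
    """Sketch of the ORPO objective (illustrative, not the repo's code).

    logp_chosen / logp_rejected: length-normalized log-likelihoods of the
    chosen and rejected responses, i.e. log p with p in (0, 1).
    lam: weight on the odds-ratio term (0.1 here is an assumed value).
    """
    def log_odds(logp):
        # log( p / (1 - p) ) computed from log p
        return logp - math.log(1.0 - math.exp(logp))

    # Log odds ratio between the chosen and rejected responses
    log_or = log_odds(logp_chosen) - log_odds(logp_rejected)
    # L_OR = -log sigmoid(log odds ratio): small when chosen >> rejected
    l_or = -math.log(1.0 / (1.0 + math.exp(-log_or)))
    # L_SFT: plain NLL on the chosen response (no separate SFT warmup phase)
    l_sft = -logp_chosen
    return l_sft + lam * l_or
```

Because the odds-ratio term is folded directly into the supervised loss, preference learning happens in a single fine-tuning pass, which is what "without the supervised fine-tuning warmup phase" refers to.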