Commit 469be02 (parent: 884d777)

Adding Evaluation Results (#1)


- Adding Evaluation Results (b764a9d7dce8b587af0c5acf01e18af359075d2b)


Co-authored-by: Open LLM Leaderboard PR Bot <leaderboard-pr-bot@users.noreply.huggingface.co>

Files changed (1):
  1. README.md (+124, −8)
README.md CHANGED
@@ -1,19 +1,122 @@
 ---
+language:
+- en
+license: llama2
+library_name: transformers
 tags:
 - merge
 - mergekit
 - lazymergekit
-base_model:
-- cognitivecomputations/dolphin-llama2-7b
-- Tensoic/Llama-2-openhermes
-license: llama2
 datasets:
 - teknium/openhermes
 - cognitivecomputations/dolphin
-language:
-- en
-library_name: transformers
+base_model:
+- cognitivecomputations/dolphin-llama2-7b
+- Tensoic/Llama-2-openhermes
 pipeline_tag: text-generation
+model-index:
+- name: OpenDolphinHermes_Llama2_7B
+  results:
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: AI2 Reasoning Challenge (25-Shot)
+      type: ai2_arc
+      config: ARC-Challenge
+      split: test
+      args:
+        num_few_shot: 25
+    metrics:
+    - type: acc_norm
+      value: 55.03
+      name: normalized accuracy
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=sethuiyer/OpenDolphinHermes_Llama2_7B
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: HellaSwag (10-Shot)
+      type: hellaswag
+      split: validation
+      args:
+        num_few_shot: 10
+    metrics:
+    - type: acc_norm
+      value: 78.74
+      name: normalized accuracy
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=sethuiyer/OpenDolphinHermes_Llama2_7B
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: MMLU (5-Shot)
+      type: cais/mmlu
+      config: all
+      split: test
+      args:
+        num_few_shot: 5
+    metrics:
+    - type: acc
+      value: 52.25
+      name: accuracy
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=sethuiyer/OpenDolphinHermes_Llama2_7B
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: TruthfulQA (0-shot)
+      type: truthful_qa
+      config: multiple_choice
+      split: validation
+      args:
+        num_few_shot: 0
+    metrics:
+    - type: mc2
+      value: 46.1
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=sethuiyer/OpenDolphinHermes_Llama2_7B
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: Winogrande (5-shot)
+      type: winogrande
+      config: winogrande_xl
+      split: validation
+      args:
+        num_few_shot: 5
+    metrics:
+    - type: acc
+      value: 73.16
+      name: accuracy
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=sethuiyer/OpenDolphinHermes_Llama2_7B
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: GSM8k (5-shot)
+      type: gsm8k
+      config: main
+      split: test
+      args:
+        num_few_shot: 5
+    metrics:
+    - type: acc
+      value: 20.17
+      name: accuracy
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=sethuiyer/OpenDolphinHermes_Llama2_7B
+      name: Open LLM Leaderboard
 ---
 
 # OpenDolphinHermes_Llama2_7B
@@ -110,4 +213,17 @@ They have become increasingly popular in recent years due to advances in machine
 Examples of large language models include GPT-2, BERT, and T5.
 ```
 ## Thanks
-Thanks to Google Colab for the compute.
+Thanks to Google Colab for the compute.
+# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
+Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_sethuiyer__OpenDolphinHermes_Llama2_7B)
+
+| Metric                          |Value|
+|---------------------------------|----:|
+|Avg.                             |54.24|
+|AI2 Reasoning Challenge (25-Shot)|55.03|
+|HellaSwag (10-Shot)              |78.74|
+|MMLU (5-Shot)                    |52.25|
+|TruthfulQA (0-shot)              |46.10|
+|Winogrande (5-shot)              |73.16|
+|GSM8k (5-shot)                   |20.17|
+
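As a sanity check on the diff above, the leaderboard's "Avg." row is simply the arithmetic mean of the six per-benchmark scores added to the table. A minimal sketch (the benchmark names and values are taken from the table; the variable names are illustrative):

```python
# Verify that the "Avg." row in the added results table is the
# arithmetic mean of the six benchmark scores.
scores = {
    "AI2 Reasoning Challenge (25-Shot)": 55.03,
    "HellaSwag (10-Shot)": 78.74,
    "MMLU (5-Shot)": 52.25,
    "TruthfulQA (0-shot)": 46.10,
    "Winogrande (5-shot)": 73.16,
    "GSM8k (5-shot)": 20.17,
}

avg = round(sum(scores.values()) / len(scores), 2)
print(avg)  # 54.24, matching the "Avg." row
```

This is useful when reviewing leaderboard PRs like this one, since a mismatch between the table's average and the per-task values would indicate a transcription error.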