leaderboard-pr-bot committed
Commit c5d085c
1 Parent(s): 298d208

Adding Evaluation Results


This is an automated PR created with https://huggingface.co/spaces/Weyaxi/open-llm-leaderboard-results-pr

The purpose of this PR is to add evaluation results from the Open LLM Leaderboard to your model card.

If you encounter any issues, please report them to https://huggingface.co/spaces/Weyaxi/open-llm-leaderboard-results-pr/discussions

Files changed (1)
README.md +118 -2
README.md CHANGED
@@ -1,10 +1,113 @@
 ---
 license: other
-license_name: microsoft-research-license
 tags:
 - storywriting
 - text adventure
 - not-for-all-audiences
+license_name: microsoft-research-license
+model-index:
+- name: psyonic-cetacean-20B
+  results:
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: AI2 Reasoning Challenge (25-Shot)
+      type: ai2_arc
+      config: ARC-Challenge
+      split: test
+      args:
+        num_few_shot: 25
+    metrics:
+    - type: acc_norm
+      value: 63.57
+      name: normalized accuracy
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=jebcarter/psyonic-cetacean-20B
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: HellaSwag (10-Shot)
+      type: hellaswag
+      split: validation
+      args:
+        num_few_shot: 10
+    metrics:
+    - type: acc_norm
+      value: 86.2
+      name: normalized accuracy
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=jebcarter/psyonic-cetacean-20B
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: MMLU (5-Shot)
+      type: cais/mmlu
+      config: all
+      split: test
+      args:
+        num_few_shot: 5
+    metrics:
+    - type: acc
+      value: 59.66
+      name: accuracy
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=jebcarter/psyonic-cetacean-20B
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: TruthfulQA (0-shot)
+      type: truthful_qa
+      config: multiple_choice
+      split: validation
+      args:
+        num_few_shot: 0
+    metrics:
+    - type: mc2
+      value: 57.55
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=jebcarter/psyonic-cetacean-20B
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: Winogrande (5-shot)
+      type: winogrande
+      config: winogrande_xl
+      split: validation
+      args:
+        num_few_shot: 5
+    metrics:
+    - type: acc
+      value: 78.14
+      name: accuracy
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=jebcarter/psyonic-cetacean-20B
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: GSM8k (5-shot)
+      type: gsm8k
+      config: main
+      split: test
+      args:
+        num_few_shot: 5
+    metrics:
+    - type: acc
+      value: 14.71
+      name: accuracy
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=jebcarter/psyonic-cetacean-20B
+      name: Open LLM Leaderboard
 ---
 
 ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6459a451abdbb77c4c6d8258/uNoKlBulkRF3mCoMgetGs.png)
@@ -46,4 +149,17 @@ Despite that, we have tested the model out to 16000 context via Rope scaling and
 
 Please enjoy, and if you encounter anything exciting or weird, please reach out to me at [jebcarter@pm.me].
 
-Special thanks as always to the KoboldAI crew who provided the mergebox, testing, and feedback on this model, and to gelukuMLG for the model mascot!
+Special thanks as always to the KoboldAI crew who provided the mergebox, testing, and feedback on this model, and to gelukuMLG for the model mascot!
+# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
+Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_jebcarter__psyonic-cetacean-20B)
+
+| Metric |Value|
+|---------------------------------|----:|
+|Avg. |59.97|
+|AI2 Reasoning Challenge (25-Shot)|63.57|
+|HellaSwag (10-Shot) |86.20|
+|MMLU (5-Shot) |59.66|
+|TruthfulQA (0-shot) |57.55|
+|Winogrande (5-shot) |78.14|
+|GSM8k (5-shot) |14.71|
+
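
For reference, the `Avg.` row in the added table is the arithmetic mean of the six benchmark scores: (63.57 + 86.20 + 59.66 + 57.55 + 78.14 + 14.71) / 6 ≈ 59.97. The sketch below is not part of the diff; it only illustrates, under the assumption that this PR has been merged so the metadata is live on the Hub and that `huggingface_hub` and PyYAML are installed, how the `model-index` block added here can be read back and the average recomputed.

```python
# Minimal sketch (not part of this PR): recompute the "Avg." row from the
# model-index metadata added above. Assumes the PR has been merged and that
# `huggingface_hub` and `pyyaml` are installed.
import yaml
from huggingface_hub import hf_hub_download

# Fetch the model card this PR modifies.
readme_path = hf_hub_download(
    repo_id="jebcarter/psyonic-cetacean-20B", filename="README.md"
)
with open(readme_path, encoding="utf-8") as f:
    text = f.read()

# The card metadata is the YAML front matter between the first pair of '---' lines.
front_matter = text.split("---")[1]
metadata = yaml.safe_load(front_matter)

# Each leaderboard entry added by this PR carries exactly one metric.
results = metadata["model-index"][0]["results"]
scores = [entry["metrics"][0]["value"] for entry in results]

print(scores)                               # [63.57, 86.2, 59.66, 57.55, 78.14, 14.71]
print(round(sum(scores) / len(scores), 2))  # 59.97 -- the "Avg." value in the table
```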