Adding Evaluation Results #17
by leaderboard-pr-bot · opened

README.md CHANGED
@@ -271,3 +271,17 @@ Our code and checkpoints are open to research purpose, and they are allowed for
 
 If you are interested to leave a message to either our research team or product team, join our Discord or WeChat groups! Also, feel free to send an email to qianwen_opensource@alibabacloud.com.
 
+
+# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
+Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Qwen__Qwen-7B)
+
+| Metric                | Value |
+|-----------------------|-------|
+| Avg.                  | 52.05 |
+| ARC (25-shot)         | 51.37 |
+| HellaSwag (10-shot)   | 78.47 |
+| MMLU (5-shot)         | 59.84 |
+| TruthfulQA (0-shot)   | 47.79 |
+| Winogrande (5-shot)   | 72.69 |
+| GSM8K (5-shot)        | 44.96 |
+| DROP (3-shot)         | 9.25  |