Adding Evaluation Results
#1
by leaderboard-pr-bot - opened
README.md CHANGED
@@ -248,4 +248,17 @@ Remember, we're just getting started. This is just the beginning of a journey th
 ---
 
 
-Check the GitHub for the code -> [GenZ](https://raw.githubusercontent.com/BudEcosystem/GenZ)
+Check the GitHub for the code -> [GenZ](https://raw.githubusercontent.com/BudEcosystem/GenZ)
+# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
+Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_budecosystem__genz-13b-v2)
+
+| Metric                | Value |
+|-----------------------|-------|
+| Avg.                  | 49.72 |
+| ARC (25-shot)         | 55.97 |
+| HellaSwag (10-shot)   | 79.98 |
+| MMLU (5-shot)         | 54.3  |
+| TruthfulQA (0-shot)   | 48.09 |
+| Winogrande (5-shot)   | 74.59 |
+| GSM8K (5-shot)        | 12.28 |
+| DROP (3-shot)         | 22.84 |
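
For anyone who wants to dig into the per-task numbers behind this summary table, the linked details dataset can be pulled with the `datasets` library. The following is a minimal sketch, not part of the PR itself: it assumes `datasets` is installed and network access to the Hugging Face Hub, and it discovers the available benchmark configurations at runtime rather than hardcoding their names.

```python
# Sketch: browse the detailed Open LLM Leaderboard results for genz-13b-v2.
# Assumes `pip install datasets` and network access to the Hugging Face Hub.
from datasets import get_dataset_config_names, load_dataset

repo = "open-llm-leaderboard/details_budecosystem__genz-13b-v2"

# List the available result configurations (one per benchmark/run).
configs = get_dataset_config_names(repo)
print(configs)

# Load one configuration as an example; split names vary per run,
# so load the full DatasetDict and inspect what is available.
details = load_dataset(repo, configs[0])
print(details)
```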