Adding Evaluation Results #1
by leaderboard-pr-bot - opened

README.md CHANGED
@@ -1,7 +1,6 @@
 ---
-library_name: transformers
 license: llama3.1
-
+library_name: transformers
 tags:
 - alignment-handbook
 - trl
@@ -10,6 +9,7 @@ tags:
 - trl
 - sft
 - generated_from_trainer
+base_model: meta-llama/Llama-3.1-8B
 datasets:
 - argilla-warehouse/magpie-ultra-v1.0
 model-index:
@@ -70,3 +70,17 @@ The following hyperparameters were used during training:
 - Pytorch 2.4.1+cu121
 - Datasets 3.0.1
 - Tokenizers 0.20.0
+
+# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
+Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_argilla-warehouse__Llama-3.1-8B-MagPie-Ultra)
+
+| Metric             |Value|
+|-------------------|----:|
+|Avg.               |19.46|
+|IFEval (0-Shot)    |57.57|
+|BBH (3-Shot)       |23.52|
+|MATH Lvl 5 (4-Shot)| 5.36|
+|GPQA (0-shot)      | 2.24|
+|MuSR (0-shot)      | 4.25|
+|MMLU-PRO (5-shot)  |23.82|
+
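For reference, the Avg. row added by this PR appears to be the plain arithmetic mean of the six benchmark scores in the table above. A minimal sketch to verify that, using only the values copied from the diff (this code is illustrative and is not part of the PR):

```python
# Minimal sketch: assumes the leaderboard "Avg." is the simple mean of the six
# benchmark scores reported in the table added by this PR.
scores = {
    "IFEval (0-Shot)": 57.57,
    "BBH (3-Shot)": 23.52,
    "MATH Lvl 5 (4-Shot)": 5.36,
    "GPQA (0-shot)": 2.24,
    "MuSR (0-shot)": 4.25,
    "MMLU-PRO (5-shot)": 23.82,
}

avg = sum(scores.values()) / len(scores)
print(f"{avg:.2f}")  # prints 19.46, matching the Avg. row in the diff
```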