leaderboard-pr-bot committed
Commit bc5f201
1 Parent(s): 71fd433

Adding Evaluation Results


This is an automated PR created with https://huggingface.co/spaces/Weyaxi/open-llm-leaderboard-results-pr

The purpose of this PR is to add evaluation results from the Open LLM Leaderboard to your model card.

If you encounter any issues, please report them to https://huggingface.co/spaces/Weyaxi/open-llm-leaderboard-results-pr/discussions
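The bot edits the YAML front matter of the model card (the block between the two `---` markers at the top of README.md) to append a `model-index` section. As a minimal illustration only (this regex-based split is a sketch, not the bot's actual implementation), the front matter of a card can be located like this:

```python
import re

# Abbreviated model card text, front matter delimited by '---' lines.
readme = """---
license: llama3.2
base_model:
- meta-llama/Llama-3.2-1B-Instruct
model-index:
- name: Llama-3.2-SUN-2.4B-v1.0.0
---

# MedIT SUN 2.4B
"""

# Front matter sits between the first pair of '---' lines.
match = re.match(r"^---\n(.*?)\n---\n", readme, re.DOTALL)
front_matter = match.group(1)
print("model-index:" in front_matter)  # True
```

In practice the front matter would then be parsed as YAML and the `model-index` list merged in, rather than manipulated as raw text.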

Files changed (1): README.md (+109, -1)

README.md CHANGED
@@ -2,6 +2,101 @@
 license: llama3.2
 base_model:
 - meta-llama/Llama-3.2-1B-Instruct
+model-index:
+- name: Llama-3.2-SUN-2.4B-v1.0.0
+  results:
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: IFEval (0-Shot)
+      type: HuggingFaceH4/ifeval
+      args:
+        num_few_shot: 0
+    metrics:
+    - type: inst_level_strict_acc and prompt_level_strict_acc
+      value: 53.89
+      name: strict accuracy
+    source:
+      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=meditsolutions/Llama-3.2-SUN-2.4B-v1.0.0
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: BBH (3-Shot)
+      type: BBH
+      args:
+        num_few_shot: 3
+    metrics:
+    - type: acc_norm
+      value: 6.46
+      name: normalized accuracy
+    source:
+      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=meditsolutions/Llama-3.2-SUN-2.4B-v1.0.0
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: MATH Lvl 5 (4-Shot)
+      type: hendrycks/competition_math
+      args:
+        num_few_shot: 4
+    metrics:
+    - type: exact_match
+      value: 3.25
+      name: exact match
+    source:
+      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=meditsolutions/Llama-3.2-SUN-2.4B-v1.0.0
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: GPQA (0-shot)
+      type: Idavidrein/gpqa
+      args:
+        num_few_shot: 0
+    metrics:
+    - type: acc_norm
+      value: 0.0
+      name: acc_norm
+    source:
+      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=meditsolutions/Llama-3.2-SUN-2.4B-v1.0.0
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: MuSR (0-shot)
+      type: TAUR-Lab/MuSR
+      args:
+        num_few_shot: 0
+    metrics:
+    - type: acc_norm
+      value: 2.38
+      name: acc_norm
+    source:
+      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=meditsolutions/Llama-3.2-SUN-2.4B-v1.0.0
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: MMLU-PRO (5-shot)
+      type: TIGER-Lab/MMLU-Pro
+      config: main
+      split: test
+      args:
+        num_few_shot: 5
+    metrics:
+    - type: acc
+      value: 5.91
+      name: accuracy
+    source:
+      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=meditsolutions/Llama-3.2-SUN-2.4B-v1.0.0
+      name: Open LLM Leaderboard
 ---
 
 # MedIT SUN 2.4B
@@ -34,4 +129,17 @@ base_model:
 - General conversation and task-oriented interactions
 
 **Limitations**
-As the model is still in training, performance and capabilities may vary. Users should be aware that the model is not in its final form and may exhibit inconsistencies or limitations typical of in-progress AI models.
+As the model is still in training, performance and capabilities may vary. Users should be aware that the model is not in its final form and may exhibit inconsistencies or limitations typical of in-progress AI models.
+# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
+Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_meditsolutions__Llama-3.2-SUN-2.4B-v1.0.0)
+
+| Metric            |Value|
+|-------------------|----:|
+|Avg.               |11.98|
+|IFEval (0-Shot)    |53.89|
+|BBH (3-Shot)       | 6.46|
+|MATH Lvl 5 (4-Shot)| 3.25|
+|GPQA (0-shot)      | 0.00|
+|MuSR (0-shot)      | 2.38|
+|MMLU-PRO (5-shot)  | 5.91|
+
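The `Avg.` row reported by the leaderboard is the arithmetic mean of the six benchmark scores, which can be verified directly:

```python
# Benchmark scores reported by the Open LLM Leaderboard
# for meditsolutions/Llama-3.2-SUN-2.4B-v1.0.0.
scores = {
    "IFEval (0-Shot)": 53.89,
    "BBH (3-Shot)": 6.46,
    "MATH Lvl 5 (4-Shot)": 3.25,
    "GPQA (0-shot)": 0.00,
    "MuSR (0-shot)": 2.38,
    "MMLU-PRO (5-shot)": 5.91,
}

avg = round(sum(scores.values()) / len(scores), 2)
print(avg)  # 11.98
```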