PEFT · PyTorch · Safetensors · llama · Generated from Trainer
leaderboard-pr-bot committed
Commit ed30445
1 Parent(s): 3e1429f

Adding Evaluation Results


This is an automated PR created with https://huggingface.co/spaces/Weyaxi/open-llm-leaderboard-results-pr

The purpose of this PR is to add evaluation results from the Open LLM Leaderboard to your model card.

If you encounter any issues, please report them to https://huggingface.co/spaces/Weyaxi/open-llm-leaderboard-results-pr/discussions

Files changed (1)
  1. README.md +16 -2
README.md CHANGED
@@ -1,9 +1,9 @@
 ---
-base_model: pints-ai/1.5-Pints-16K-v0.1
-library_name: peft
 license: mit
+library_name: peft
 tags:
 - generated_from_trainer
+base_model: pints-ai/1.5-Pints-16K-v0.1
 model-index:
 - name: tangledgroup/tangled-llama-pints-1.5b-v0.1-instruct
   results: []
@@ -149,3 +149,17 @@ The following hyperparameters were used during training:
 - Pytorch 2.4.0
 - Datasets 2.20.0
 - Tokenizers 0.19.1
+
+# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
+Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_tangledgroup__tangled-llama-pints-1.5b-v0.1-instruct)
+
+| Metric |Value|
+|-------------------|----:|
+|Avg. | 4.18|
+|IFEval (0-Shot) |15.09|
+|BBH (3-Shot) | 3.84|
+|MATH Lvl 5 (4-Shot)| 0.08|
+|GPQA (0-shot) | 0.00|
+|MuSR (0-shot) | 4.85|
+|MMLU-PRO (5-shot) | 1.21|
+
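
For reference, the Avg. value in the added table appears to be the unweighted mean of the six benchmark scores: (15.09 + 3.84 + 0.08 + 0.00 + 4.85 + 1.21) / 6 ≈ 4.18.

The front matter touched by this diff declares `library_name: peft` and `base_model: pints-ai/1.5-Pints-16K-v0.1`, i.e. the card describes a PEFT adapter on top of that base model. Below is a minimal sketch of how such a card is typically consumed, assuming the repo ships a standard PEFT adapter plus tokenizer files (not part of this PR; the actual repo layout may differ):

```python
# Minimal sketch, assuming this repo hosts a PEFT adapter for the base model
# named in the card's front matter; file layout in the actual repo may differ.
from transformers import AutoTokenizer
from peft import AutoPeftModelForCausalLM

adapter_id = "tangledgroup/tangled-llama-pints-1.5b-v0.1-instruct"

# AutoPeftModelForCausalLM reads the adapter config, loads the base model it
# points to (pints-ai/1.5-Pints-16K-v0.1), and attaches the adapter weights.
model = AutoPeftModelForCausalLM.from_pretrained(adapter_id)
tokenizer = AutoTokenizer.from_pretrained(adapter_id)  # or load from the base model

inputs = tokenizer("Write one sentence about tea.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```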