---
title: apps_metric
datasets:
-
tags:
- evaluate
- metric
description: "Evaluation metric for the APPS benchmark"
sdk: gradio
sdk_version: 3.0.2
app_file: app.py
pinned: false
---

# Metric Card for apps-metric [WIP]

## Metric Description
This metric is used to evaluate code generation on the [APPS benchmark](https://huggingface.co/datasets/codeparrot/apps).

## How to Use
You can load the metric and use it with the following commands:

```
from evaluate import load

apps_metric = load('loubnabnl/apps_metric')
results = apps_metric.compute(predictions=generations)
```
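
Here, `generations` is expected to be a nested list with one sub-list of candidate solutions per APPS problem, kept in the same order as the dataset. The snippet below is a minimal illustrative sketch; the two toy solution strings are placeholders, not real APPS solutions:

```
from evaluate import load

apps_metric = load('loubnabnl/apps_metric')

# One sub-list of candidate programs per problem, kept in dataset order.
# These solution strings are placeholders for illustration only.
generations = [
    ["s = input()\nprint(s[::-1])"],                 # candidate(s) for problem 0
    ["n = int(input())\nprint(n * (n + 1) // 2)"],   # candidate(s) for problem 1
]

results = apps_metric.compute(predictions=generations)
print(results)
```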

### Inputs
**generations** (list(list(str))): list of code generations; each sub-list contains the generations for one problem of the APPS dataset. The order of the problems in the dataset must be kept.

### Output Values

**average accuracy**: when a single solution is generated per problem, average accuracy is the average fraction of test cases that the solutions pass.

**strict accuracy**: when a single solution is generated per problem, strict accuracy is the fraction of problems whose solution passes all of its test cases.
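
To illustrate the difference, the toy computation below assumes two problems whose solutions pass 3/4 and 4/4 of their test cases respectively (hypothetical numbers, not produced by the metric):

```
# Hypothetical per-problem results: fraction of test cases passed by each solution.
fractions_passed = [3 / 4, 4 / 4]

# Average accuracy: mean fraction of test cases passed across problems.
average_accuracy = sum(fractions_passed) / len(fractions_passed)                    # 0.875

# Strict accuracy: fraction of problems where *all* test cases pass.
strict_accuracy = sum(f == 1.0 for f in fractions_passed) / len(fractions_passed)  # 0.5

print(average_accuracy, strict_accuracy)
```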

**pass@k**: when multiple solutions are generated per problem, pass@k is the metric originally used for the [HumanEval](https://huggingface.co/datasets/openai_humaneval) benchmark. For more details, please refer to the [metric space](https://huggingface.co/spaces/evaluate-metric/code_eval) and the [Codex paper](https://arxiv.org/pdf/2107.03374v2.pdf).
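
For reference, the Codex paper estimates pass@k with an unbiased estimator over n sampled solutions per problem, of which c pass all test cases: pass@k = 1 - C(n-c, k) / C(n, k). A minimal sketch of that estimator (shown for illustration, not this metric's own code) is:

```
import numpy as np

def estimate_pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator from the Codex paper:
    1 - C(n - c, k) / C(n, k), computed in a numerically stable way."""
    if n - c < k:
        return 1.0
    return 1.0 - np.prod(1.0 - k / np.arange(n - c + 1, n + 1))

# Example: 10 samples for a problem, 3 of which pass all test cases.
print(estimate_pass_at_k(n=10, c=3, k=1))  # 0.3
```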

## Citation
```
@article{hendrycksapps2021,
  title={Measuring Coding Challenge Competence With APPS},
  author={Dan Hendrycks and Steven Basart and Saurav Kadavath and Mantas Mazeika and Akul Arora and Ethan Guo and Collin Burns and Samir Puranik and Horace He and Dawn Song and Jacob Steinhardt},
  journal={NeurIPS},
  year={2021}
}
```