---
title: APPS Metric
emoji: 📊
colorFrom: blue
colorTo: pink
datasets:
- codeparrot/apps
tags:
- evaluate
- metric
description: "Evaluation metric for the APPS benchmark"
sdk: gradio
sdk_version: 3.0.2
app_file: app.py
pinned: false
---

# Metric Card for apps_metric 

## Metric Description
This metric is used to evaluate code generation on the [APPS benchmark](https://huggingface.co/datasets/codeparrot/apps).

## How to Use
You can load the metric and use it with the following commands:

```python
from evaluate import load
apps_metric = load('codeparrot/apps_metric')
# for example, evaluate generations made for all difficulty levels
results = apps_metric.compute(predictions=generations, level="all")
```
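
For instance, assuming the `compute` call also accepts the APPS split names for `level` and a `k_list` argument to select which pass@k values are reported, a single difficulty split could be evaluated like this:

```python
# hedged sketch: assumes `level` accepts the APPS split names
# ("introductory", "interview", "competition") and that `k_list`
# selects which pass@k values are reported
results_intro = apps_metric.compute(
    predictions=generations,
    level="introductory",
    k_list=[1, 10],
)
```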

### Inputs
**generations** `list(list(str))`: list of code generations, passed to `compute` via the `predictions` argument. Each sub-list contains the generations for one problem in the APPS dataset. **The order of the samples in the dataset must be preserved (with respect to the difficulty level).**
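
As a sketch of the expected shape, where `generate_one_solution` and `num_samples` are hypothetical stand-ins for your own model call and sampling budget:

```python
from datasets import load_dataset

num_samples = 5  # candidate programs per problem

def generate_one_solution(question: str) -> str:
    """Hypothetical stand-in for your model's code-generation call."""
    ...

apps = load_dataset("codeparrot/apps", split="test")

# one inner list per problem, kept in dataset order
generations = [
    [generate_one_solution(problem["question"]) for _ in range(num_samples)]
    for problem in apps
]
```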

### Output Values

**average accuracy**: when a single solution is generated per problem, average accuracy is the mean fraction of test cases passed across all problems.

**strict accuracy**: when a single solution is generated per problem, strict accuracy is the fraction of problems whose solution passes all of its test cases.
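
For intuition, a small sketch of how the two single-solution scores relate, using made-up per-problem test results:

```python
# fraction of test cases passed by the single generated solution for each problem
# (made-up numbers for illustration)
test_case_rates = [1.0, 0.7, 0.0, 1.0]

average_accuracy = sum(test_case_rates) / len(test_case_rates)                    # 0.675
strict_accuracy = sum(r == 1.0 for r in test_case_rates) / len(test_case_rates)   # 0.5
```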

**pass@k**: when multiple solutions are generated per problem, pass@k is the metric originally used for the [HumanEval](https://huggingface.co/datasets/openai_humaneval) benchmark. For more details, please refer to the [metric space](https://huggingface.co/spaces/evaluate-metric/code_eval) and the [Codex paper](https://arxiv.org/pdf/2107.03374v2.pdf).
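
For reference, a minimal sketch of the unbiased pass@k estimator from the Codex paper, applied per problem (`n` = total generations, `c` = generations passing all test cases):

```python
import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimate for one problem, 1 - C(n-c, k) / C(n, k)."""
    if n - c < k:
        return 1.0
    return 1.0 - np.prod(1.0 - k / np.arange(n - c + 1, n + 1))
```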

## Citation
```
@article{hendrycksapps2021,
  title={Measuring Coding Challenge Competence With APPS},
  author={Dan Hendrycks and Steven Basart and Saurav Kadavath and Mantas Mazeika and Akul Arora and Ethan Guo and Collin Burns and Samir Puranik and Horace He and Dawn Song and Jacob Steinhardt},
  journal={NeurIPS},
  year={2021}
}
```