---
license: apache-2.0
task_categories:
- text-generation
language:
- en
tags:
- mathqa-x
- mathqa
- mxeval
pretty_name: mbxp
size_categories:
- 1K<n<10K
---

# MathQA-X

## Table of Contents
- [MathQA-X](#mathqa-x)
  - [Table of Contents](#table-of-contents)
  - [Dataset Description](#dataset-description)
    - [Dataset Summary](#dataset-summary)
    - [Related Tasks and Leaderboards](#related-tasks-and-leaderboards)
    - [Languages](#languages)
  - [Dataset Structure](#dataset-structure)
    - [Data Instances](#data-instances)
    - [Data Fields](#data-fields)
    - [Data Splits](#data-splits)
  - [Dataset Creation](#dataset-creation)
    - [Curation Rationale](#curation-rationale)
    - [Personal and Sensitive Information](#personal-and-sensitive-information)
  - [Considerations for Using the Data](#considerations-for-using-the-data)
    - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Additional Information](#additional-information)
    - [Dataset Curators](#dataset-curators)
    - [Licensing Information](#licensing-information)
    - [Citation Information](#citation-information)
    - [Contributions](#contributions)

## Dataset Description

- **Repository:** [GitHub Repository](https://github.com/amazon-science/mbxp-exec-eval)
- **Paper:** [Multi-lingual Evaluation of Code Generation Models](https://openreview.net/forum?id=Bo7eeXm6An8)

### Dataset Summary

This repository contains data and code for execution-based multi-lingual evaluation of code generation capabilities, comprising the multi-lingual benchmark MBXP, multi-lingual MathQA, and multi-lingual HumanEval.
<br>Results and findings can be found in the paper ["Multi-lingual Evaluation of Code Generation Models"](https://arxiv.org/abs/2210.14868).

### Related Tasks and Leaderboards
* [Multi-HumanEval](https://huggingface.co/datasets/mxeval/multi-humaneval)
* [MBXP](https://huggingface.co/datasets/mxeval/mbxp)
* [MathQA-X](https://huggingface.co/datasets/mxeval/mathqa-x)

### Languages
The programming problems are written in multiple programming languages and contain English natural text in comments and docstrings.

## Dataset Structure

To look up the currently supported configurations:
```python
from datasets import get_dataset_config_names
get_dataset_config_names("mxeval/mathqa-x")
['python', 'java', 'javascript']
```
To load a specific dataset and language:
```python
from datasets import load_dataset
load_dataset("mxeval/mathqa-x", "python")
DatasetDict({
    test: Dataset({
        features: ['task_id', 'language', 'prompt', 'test', 'entry_point', 'canonical_solution'],
        num_rows: 1883
    })
})
```

### Data Instances

An example of a dataset instance:

```python
{
    "task_id": "MathQA/0",
    "language": "python",
    "prompt": "def problem():\n    \"\"\"\n    a shopkeeper sold an article offering a discount of 5 % and earned a profit of 31.1 % . what would have been the percentage of profit earned if no discount had been offered ? n0 = 5.0 n1 = 31.1\n    \"\"\"\n",
    "test": "import math\ndef compare(x, y):\n    return math.fabs(x-y)<1e-8\ncandidate = problem\nassert compare(candidate(), 38.0)\ndef check(x): pass\n",
    "entry_point": "problem",
    "canonical_solution": "    n0 = 5.0\n    n1 = 31.1\n    t0 = n1 + 100.0\n    t1 = 100.0 - n0\n    t2 = t0 * 100.0\n    t3 = t2 / t1\n    answer = t3 - 100.0\n    return answer\n"
}
```
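As a quick sanity check of this instance, the canonical solution's arithmetic can be replayed on its own (a minimal sketch; the inline comments are our reading of the intermediate values). Note that the `test` field compares with a `1e-8` tolerance because floating-point rounding leaves the result only approximately equal to 38:

```python
# Replay canonical_solution for MathQA/0 outside the dataset harness.
n0 = 5.0           # discount percent
n1 = 31.1          # profit percent when the discount is applied
t0 = n1 + 100.0    # selling price as a percentage of cost
t1 = 100.0 - n0    # fraction of the marked price actually charged
t2 = t0 * 100.0
t3 = t2 / t1       # marked price as a percentage of cost
answer = t3 - 100.0
print(abs(answer - 38.0) < 1e-8)  # True
```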

### Data Fields

- `task_id`: identifier for the data sample
- `prompt`: input for the model, containing the function header and docstring
- `canonical_solution`: reference solution for the problem in the `prompt`
- `description`: task description
- `test`: code that checks a generated completion for correctness
- `entry_point`: name of the function the tests invoke
- `language`: programming language identifier used to select the appropriate subprocess call for program execution
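As a sketch of how these fields fit together, the snippet below assembles `prompt` + completion + `test` into one program and executes it in-process, with the canonical solution standing in for a model-generated completion. This is only an illustration: the real mbxp-exec-eval harness runs each program in a separate, sandboxed process. The sample is the MathQA/0 instance, written here with explicit four-space indentation:

```python
# Minimal in-process sketch of an execution-based check (no sandboxing!).
sample = {
    "task_id": "MathQA/0",
    "entry_point": "problem",
    "prompt": (
        "def problem():\n"
        '    """\n'
        "    a shopkeeper sold an article offering a discount of 5 % and "
        "earned a profit of 31.1 % . what would have been the percentage "
        "of profit earned if no discount had been offered ? "
        "n0 = 5.0 n1 = 31.1\n"
        '    """\n'
    ),
    "canonical_solution": (
        "    n0 = 5.0\n    n1 = 31.1\n"
        "    t0 = n1 + 100.0\n    t1 = 100.0 - n0\n"
        "    t2 = t0 * 100.0\n    t3 = t2 / t1\n"
        "    answer = t3 - 100.0\n    return answer\n"
    ),
    "test": (
        "import math\n"
        "def compare(x, y):\n"
        "    return math.fabs(x-y)<1e-8\n"
        "candidate = problem\n"
        "assert compare(candidate(), 38.0)\n"
    ),
}

# Concatenate and execute; a failed assertion means the completion is wrong.
program = sample["prompt"] + sample["canonical_solution"] + sample["test"]
namespace = {}
exec(program, namespace)
func = namespace[sample["entry_point"]]  # resolve the function via entry_point
print(sample["task_id"], "passed")       # prints: MathQA/0 passed
```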

### Data Splits

- MathQA-X
  - Python
  - Java
  - Javascript

## Dataset Creation

### Curation Rationale

Since code generation models are often trained on dumps of GitHub, a dataset that was not part of those dumps was needed to evaluate the models properly. However, since this dataset has been published on GitHub, it is likely to be included in future dumps.

### Personal and Sensitive Information

None.

## Considerations for Using the Data

Make sure to sandbox the execution environment.
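As a minimal first step toward that isolation, each candidate program can at least be run in a fresh interpreter process with a timeout. The `run_candidate` helper below is a hypothetical sketch, not part of mxeval; real sandboxing additionally needs containers or jails, resource limits, and no network access:

```python
import os
import subprocess
import sys
import tempfile

def run_candidate(program: str, timeout: float = 10.0) -> bool:
    """Run a candidate program (prompt + completion + test) in a fresh
    Python process; True means it exited cleanly within the timeout."""
    fd, path = tempfile.mkstemp(suffix=".py")
    try:
        with os.fdopen(fd, "w") as f:
            f.write(program)
        result = subprocess.run(
            [sys.executable, path],
            timeout=timeout,
            capture_output=True,   # swallow the program's stdout/stderr
        )
        return result.returncode == 0
    except subprocess.TimeoutExpired:
        return False
    finally:
        os.unlink(path)

print(run_candidate("assert 1 + 1 == 2\n"))  # True
print(run_candidate("assert 1 + 1 == 3\n"))  # False
```

For non-Python problems, the `language` field described above is what selects the appropriate interpreter or compiler invocation instead of `sys.executable`.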

### Social Impact of Dataset

With this dataset, code-generating models can be evaluated more reliably, which leads to fewer issues when such models are put to use.

### Dataset Curators

AWS AI Labs

### Licensing Information

[LICENSE](https://huggingface.co/datasets/mxeval/mathqa-x/blob/main/mathqa-x-LICENSE) <br>
[THIRD PARTY LICENSES](https://huggingface.co/datasets/mxeval/mathqa-x/blob/main/THIRD_PARTY_LICENSES)

### Citation Information
```
@inproceedings{
athiwaratkun2023multilingual,
title={Multi-lingual Evaluation of Code Generation Models},
author={Ben Athiwaratkun and Sanjay Krishna Gouda and Zijian Wang and Xiaopeng Li and Yuchen Tian and Ming Tan and Wasi Uddin Ahmad and Shiqi Wang and Qing Sun and Mingyue Shang and Sujan Kumar Gonugondla and Hantian Ding and Varun Kumar and Nathan Fulton and Arash Farahani and Siddhartha Jain and Robert Giaquinto and Haifeng Qian and Murali Krishna Ramanathan and Ramesh Nallapati and Baishakhi Ray and Parminder Bhatia and Sudipta Sengupta and Dan Roth and Bing Xiang},
booktitle={The Eleventh International Conference on Learning Representations},
year={2023},
url={https://openreview.net/forum?id=Bo7eeXm6An8}
}
```

### Contributions

[skgouda@](https://github.com/sk-g) [benathi@](https://github.com/benathi)