Languages: English
skgouda committed
Commit 58475a3 (1 parent: b56b84d)

Update README.md

Files changed (1): README.md (+21 −4)
````diff
--- a/README.md
+++ b/README.md
@@ -25,11 +25,14 @@ size_categories:
 - [Data Instances](#data-instances)
 - [Data Fields](#data-fields)
 - [Data Splits](#data-splits)
+- [Executional Correctness](#execution)
+  - [Execution Example](#execution-example)
+- [Considerations for Using the Data](#considerations-for-using-the-data)
 - [Dataset Creation](#dataset-creation)
 - [Curation Rationale](#curation-rationale)
 - [Personal and Sensitive Information](#personal-and-sensitive-information)
-- [Considerations for Using the Data](#considerations-for-using-the-data)
 - [Social Impact of Dataset](#social-impact-of-dataset)
+
 - [Additional Information](#additional-information)
 - [Dataset Curators](#dataset-curators)
 - [Licensing Information](#licensing-information)
@@ -120,12 +123,26 @@ Since code generation models are often trained on dumps of GitHub a dataset not
 
 None.
 
-## Considerations for Using the Data
-Make sure to sandbox the execution environment.
-
 ### Social Impact of Dataset
 With this dataset code generating models can be better evaluated which leads to fewer issues introduced when using such models.
 
+## Execution
+
+### Execution Example
+Install the repo [mbxp-exec-eval](https://github.com/amazon-science/mbxp-exec-eval) to execute generations or canonical solutions for the prompts from this dataset.
+
+```python
+>>> from datasets import load_dataset
+>>> from mxeval.execution import check_correctness
+>>> mathqa_python = load_dataset("mxeval/mathqa-x", "python", split="test")
+>>> example_problem = mathqa_python[0]
+>>> check_correctness(example_problem, example_problem["canonical_solution"], timeout=20.0)
+{'task_id': 'MathQA/0', 'passed': True, 'result': 'passed', 'completion_id': None, 'time_elapsed': 9.673357009887695}
+```
+
+### Considerations for Using the Data
+Make sure to sandbox the execution environment.
+
 ### Dataset Curators
 AWS AI Labs
````
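The new "Considerations for Using the Data" text advises sandboxing the execution environment. As a minimal sketch of that advice (a hypothetical helper, not part of mxeval or mbxp-exec-eval), untrusted completions can at least be isolated in a separate process with a hard timeout:

```python
import subprocess
import sys

def run_sandboxed(code: str, timeout: float = 10.0) -> bool:
    """Run untrusted Python code in a child process with a hard timeout.

    Illustration only: a production sandbox should additionally drop
    privileges and restrict memory, filesystem, and network access
    (e.g. via containers or OS-level sandboxing).
    """
    try:
        # Execute the code in a fresh interpreter so it cannot touch
        # the evaluator's own process state.
        result = subprocess.run(
            [sys.executable, "-c", code],
            capture_output=True,
            timeout=timeout,
        )
        return result.returncode == 0
    except subprocess.TimeoutExpired:
        # Treat a hang as a failure, mirroring the timeout parameter
        # shown in the check_correctness example above.
        return False

print(run_sandboxed("assert 1 + 1 == 2"))   # True
print(run_sandboxed("raise ValueError()"))  # False
```

Running the child with `sys.executable -c` keeps the sketch self-contained; the subprocess boundary is what prevents a misbehaving completion from crashing or blocking the evaluation loop itself.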