---
license: mit
dataset_info:
  features:
  - name: input_ids
    sequence: int64
  - name: attention_mask
    sequence: int64
  - name: labels
    sequence: int64
  splits:
  - name: train
    num_bytes: 257967900
    num_examples: 20973
  - name: val
    num_bytes: 45891300
    num_examples: 3731
  download_size: 10916827
  dataset_size: 303859200
language:
- en
pretty_name: github-commits
size_categories:
- 10K<n<100K
---

This dataset contains the code changes from each commit of the most-starred Python projects hosted on GitHub.

## Code to reproduce the parsing process

To parse the code we performed the following steps:

* Get the list of the most-starred GitHub repositories via the GitHub API.
* Using the **git** Python package (GitPython), clone every repository from the list to a local machine and write the code difference for each commit of every repository to the dataset.
* Clean the dataset to remove too-large commits, commits with non-Python code changes, commits with non-ASCII characters, etc.
* Group the files changed in one commit into a single sample of the dataset.

To reproduce these steps:

1) run *src/github_parsing.ipynb* to parse the repositories from GitHub
2) run *src/data_cleaning.ipynb* to clean the data and group the dataset samples

## Dataset features

The dataset has the following features:

1) repo_name
2) commit_message
3) commit_changes - the code changes in all Python files contained in the commit
4) files_changed - the number of files changed in the commit
5) changes_len - the number of characters in the code changes

For model training we used only the *commit_message* feature as the label and *commit_changes* as the input to the model.
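The cleaning step above could be sketched roughly as follows. This is an illustrative filter, not the notebook's actual code: the `MAX_CHANGE_LEN` threshold, the field name, and the `.py`-substring check for Python files are all assumptions.

```python
# Illustrative sketch of the cleaning filters described above.
# The threshold and heuristics are assumptions, not the notebook's actual values.

MAX_CHANGE_LEN = 10_000  # assumed cap on characters in a commit's diff

def is_clean(sample: dict) -> bool:
    """Return True if a raw commit sample passes the cleaning filters."""
    changes = sample["commit_changes"]
    # Drop too-large commits.
    if len(changes) > MAX_CHANGE_LEN:
        return False
    # Drop commits containing non-ASCII characters.
    if not changes.isascii():
        return False
    # Drop commits with no Python file changes
    # (file names appear in the diff header lines).
    if ".py" not in changes:
        return False
    return True

raw = [
    {"commit_changes": "a/app.py b/app.py\n+print('hi')"},
    {"commit_changes": "a/doc.md b/doc.md\n+notes"},           # no Python file
    {"commit_changes": "a/app.py b/app.py\n+print('héllo')"},  # non-ASCII
]
cleaned = [s for s in raw if is_clean(s)]
print(len(cleaned))  # → 1
```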
Code changes have the following structure:

```
name_of_the_file
code_of_changes
```

Special tokens used in the input:

* one token to separate the name of the file
* two tokens to mark added and deleted lines of code in the commit
* one token to separate the commit message

Example of input for the model:

```
a/tests/test_constraint.py b/tests/test_constraint.py
--- a/tests/test_constraint.py
+++ b/tests/test_constraint.py
@@ -87,10 +87,15 @@ def test_accurate_approximation_when_known():
         n_iter=10,
     )

-    params = optimizer.res[0]["params"]
-    x, y = params['x'], params['y']
+    # Exclude the last sampled point, because the constraint is not fitted on that.
+    res = np.array([[r['target'], r['constraint'], r['params']['x'], r['params']['y']] for r in optimizer.res[:-1]])
+
+    xy = res[:, [2, 3]]
+    x = res[:, 2]
+    y = res[:, 3]

-    assert constraint_function(x, y) == approx(conmod.approx(np.array([x, y])), rel=1e-5, abs=1e-5)
+    assert constraint_function(x, y) == approx(conmod.approx(xy), rel=1e-5, abs=1e-5)
+    assert constraint_function(x, y) == approx(optimizer.space.constraint_values[:-1], rel=1e-5, abs=1e-5)

 def test_multiple_constraints():
```

If a commit changes several files, the different files are separated by 3 blank lines.
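Because files within one commit are separated by 3 blank lines, a sample's `commit_changes` string can be split back into per-file diffs. A minimal sketch, assuming the three blank lines serialize as four consecutive newline characters (the exact serialization is an assumption):

```python
# Split a multi-file commit_changes string back into per-file diffs.
# Assumption: "3 blank lines" between files means four consecutive newlines.
SEP = "\n\n\n\n"

def split_files(commit_changes: str) -> list[str]:
    """Return the per-file diff chunks of a commit, dropping empty chunks."""
    return [chunk for chunk in commit_changes.split(SEP) if chunk.strip()]

example = (
    "a/foo.py b/foo.py\n--- a/foo.py\n+++ b/foo.py\n+x = 1"
    + SEP
    + "a/bar.py b/bar.py\n--- a/bar.py\n+++ b/bar.py\n-y = 2"
)
parts = split_files(example)
print(len(parts))                # → 2
print(parts[1].splitlines()[0])  # → a/bar.py b/bar.py
```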