---
language:
- en
license:
- unknown
---
# JSON Schema Dataset

This dataset is a collection of JSON Schema documents gathered from GitHub via the Sourcegraph code search API.

# Step 1: Find a list of JSON Schema paths

The [Sourcegraph](https://sourcegraph.com/) code search API is used to find files that have a `.json` extension and contain `{\n  "$schema": "https://json-schema.org/"`.
This query is somewhat restrictive, but it still finds a large number of schemas.

    pipenv run python slurp.py --outfile repos.csv
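
For reference, below is a minimal sketch of the kind of request `slurp.py` could make against Sourcegraph's GraphQL endpoint. The search string, result handling, and `repos.csv` column layout are illustrative assumptions, not the script's actual code.

    import csv
    import requests

    # Illustrative only: the real logic lives in slurp.py.
    GRAPHQL_URL = "https://sourcegraph.com/.api/graphql"
    GRAPHQL_QUERY = """
    query ($q: String!) {
      search(query: $q) {
        results {
          results {
            ... on FileMatch {
              repository { name }
              file { path }
            }
          }
        }
      }
    }
    """
    # Assumed Sourcegraph query approximating the criteria above.
    SEARCH = 'file:\\.json$ "$schema": "https://json-schema.org/" count:all'

    resp = requests.post(
        GRAPHQL_URL,
        json={"query": GRAPHQL_QUERY, "variables": {"q": SEARCH}},
        headers={"Authorization": "token YOUR_SOURCEGRAPH_TOKEN"},  # placeholder
    )
    resp.raise_for_status()

    with open("repos.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["repository", "path"])  # assumed column layout
        for match in resp.json()["data"]["search"]["results"]["results"]:
            writer.writerow([match["repository"]["name"], match["file"]["path"]])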

# Step 2: Fetch the history information for each file

We want every revision of each JSON Schema file, so before downloading any files we use the GitHub API to list the commit hashes that touched each one.
The resulting data is saved to `commits.json`.

    pipenv run python fetch_history.py
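
GitHub's commits endpoint accepts a `path` filter and returns the commits that touched that file, which is presumably what `fetch_history.py` relies on. A hedged sketch of that lookup; the `repos.csv` column names and the `commits.json` layout are assumptions:

    import csv
    import json
    import requests

    # Illustrative sketch; fetch_history.py is the real implementation.
    HEADERS = {"Authorization": "token YOUR_GITHUB_TOKEN"}  # placeholder

    def commit_shas(owner_repo, path):
        """List the SHAs of all commits that touched `path` in owner/repo."""
        resp = requests.get(
            f"https://api.github.com/repos/{owner_repo}/commits",
            params={"path": path, "per_page": 100},
            headers=HEADERS,
        )
        resp.raise_for_status()
        return [commit["sha"] for commit in resp.json()]

    history = {}
    with open("repos.csv", newline="") as f:
        for row in csv.DictReader(f):  # assumed columns: repository, path
            repo = row["repository"].removeprefix("github.com/")
            history[f'{repo}/{row["path"]}'] = commit_shas(repo, row["path"])

    with open("commits.json", "w") as f:
        json.dump(history, f, indent=2)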

# Step 3: Download the JSON Schema files

This script downloads every revision of each schema from GitHub and saves it into subfolders of the `data` directory.

    ./fetch_files.sh
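
`fetch_files.sh` is not reproduced here, but each download is just a GET against GitHub's raw file host, keyed by the commit SHAs gathered in step 2. A Python rendering of a single fetch, with an assumed directory layout:

    import pathlib
    import requests

    # Illustrative only; the data/ subfolder layout below is an assumption.
    def fetch_revision(owner_repo, sha, path):
        """Download one revision of one schema into the data directory."""
        url = f"https://raw.githubusercontent.com/{owner_repo}/{sha}/{path}"
        resp = requests.get(url)
        resp.raise_for_status()
        out = pathlib.Path("data") / owner_repo / sha / path
        out.parent.mkdir(parents=True, exist_ok=True)
        out.write_bytes(resp.content)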

# Step 4: Validate each JSON Schema

The following script reads each schema in the `data` directory and confirms that it is a valid JSON Schema.
A copy of every valid schema is placed in the `valid_data` directory.
Note that schemas are parsed as [JSON5](https://json5.org/) to be permissive about the accepted syntax, but the final schemas are written out as standard JSON.

    pipenv run python validate_schemas.py
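
The core of this check can be done with the `json5` and `jsonschema` packages: parse permissively, ask `jsonschema` for the validator class matching the schema's declared draft, and verify the document against that draft's meta-schema. A minimal sketch, with the file layout assumed:

    import json
    import pathlib

    import json5
    from jsonschema import validators

    # Minimal sketch; validate_schemas.py is the real script.
    for src in pathlib.Path("data").rglob("*.json"):
        try:
            schema = json5.loads(src.read_text(encoding="utf-8"))  # permissive parse
            # Select the validator for the declared draft and check the
            # document against that draft's meta-schema.
            validators.validator_for(schema).check_schema(schema)
        except Exception:
            continue  # not valid JSON5, or not a valid JSON Schema
        dst = pathlib.Path("valid_data") / src.relative_to("data")
        dst.parent.mkdir(parents=True, exist_ok=True)
        dst.write_text(json.dumps(schema, indent=2), encoding="utf-8")  # plain JSON out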

# Step 5: Split into train, test, and validation

Finally, the data is split into training, test, and validation sets.
Schemas from the same GitHub organization are always placed in the same split.
Schemas can also be checked for similarity so that very similar schemas end up in the same split.

    pipenv run python train_split.py
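
A sketch of what an organization-level split can look like follows; the 80/10/10 fractions and directory layout are assumptions, and the similarity grouping mentioned above is omitted for brevity:

    import collections
    import pathlib
    import random

    # Illustrative sketch; train_split.py is the real implementation.
    random.seed(0)
    by_org = collections.defaultdict(list)
    for path in pathlib.Path("valid_data").rglob("*.json"):
        org = path.relative_to("valid_data").parts[0]  # assumed: first dir = org
        by_org[org].append(path)

    # Shuffle organizations, not files, so an org never straddles two splits.
    orgs = list(by_org)
    random.shuffle(orgs)
    n_train, n_test = int(0.8 * len(orgs)), int(0.1 * len(orgs))
    splits = {
        "train": orgs[:n_train],
        "test": orgs[n_train : n_train + n_test],
        "validation": orgs[n_train + n_test :],
    }
    for name, members in splits.items():
        files = [str(p) for org in members for p in by_org[org]]
        print(name, len(files))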