---
language:
  - en
license:
  - unknown
---

# JSON Schema Dataset

This dataset is a collection of JSON Schema documents collected from GitHub by searching with the Sourcegraph API.

## Step 1: Find a list of JSON Schema paths

The Sourcegraph code search API is used to find files with a `.json` extension that contain `{\n "$schema": "https://json-schema.org/"`. This is somewhat restrictive, but it still finds a large number of schemas.

```sh
pipenv run python slurp.py --outfile repos.csv
```
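
The exact query logic lives in `slurp.py`; the sketch below is only an approximation of what such a script might do with the Sourcegraph GraphQL search API. The endpoint, query string, environment variable, and CSV columns are assumptions, not the actual implementation.

```python
# Hypothetical sketch of the search behind slurp.py; the real script may use a
# different query string, pagination, authentication, or output columns.
import csv
import os

import requests

GRAPHQL_ENDPOINT = "https://sourcegraph.com/.api/graphql"

# Assumed Sourcegraph query: .json files whose content declares a
# json-schema.org "$schema".
SEARCH_QUERY = r'file:\.json$ content:"\"$schema\": \"https://json-schema.org/" count:all'

GRAPHQL_QUERY = """
query ($query: String!) {
  search(query: $query) {
    results {
      results {
        ... on FileMatch {
          repository { name }
          file { path }
        }
      }
    }
  }
}
"""


def find_schema_paths(outfile="repos.csv"):
    token = os.environ.get("SRC_ACCESS_TOKEN")  # optional Sourcegraph access token
    headers = {"Authorization": f"token {token}"} if token else {}
    resp = requests.post(
        GRAPHQL_ENDPOINT,
        json={"query": GRAPHQL_QUERY, "variables": {"query": SEARCH_QUERY}},
        headers=headers,
        timeout=60,
    )
    resp.raise_for_status()
    matches = resp.json()["data"]["search"]["results"]["results"]
    with open(outfile, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["repository", "path"])
        for match in matches:
            writer.writerow([match["repository"]["name"], match["file"]["path"]])


if __name__ == "__main__":
    find_schema_paths()
```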

## Step 2: Download the JSON Schema files

This script downloads each schema that comes from GitHub and saves it into subfolders of the `data` directory.

```sh
./fetch_files.sh
```
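
`fetch_files.sh` is a shell script; the following is a rough Python equivalent of the idea, assuming files are fetched from `raw.githubusercontent.com` and stored under `data/<org>/<repo>/<path>`. The URL form and directory layout are assumptions.

```python
# Rough Python equivalent of the idea behind fetch_files.sh (hypothetical; the
# actual shell script may build URLs and directories differently).
import csv
import pathlib

import requests


def fetch_files(repo_csv="repos.csv", out_dir="data"):
    with open(repo_csv, newline="") as f:
        for row in csv.DictReader(f):
            repo, path = row["repository"], row["path"]
            # Only schemas hosted on GitHub are downloaded.
            if not repo.startswith("github.com/"):
                continue
            org_repo = repo.removeprefix("github.com/")  # e.g. "org/repo"
            # Assumed raw URL form; HEAD resolves to the repository's default branch.
            url = f"https://raw.githubusercontent.com/{org_repo}/HEAD/{path}"
            resp = requests.get(url, timeout=30)
            if resp.status_code != 200:
                continue
            dest = pathlib.Path(out_dir) / org_repo / path
            dest.parent.mkdir(parents=True, exist_ok=True)
            dest.write_text(resp.text, encoding="utf-8")


if __name__ == "__main__":
    fetch_files()
```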

## Step 3: Validate each JSON Schema

The following script reads each schema in the `data` directory and confirms that it is a valid JSON Schema. A copy of all valid schemas is placed in the `valid_data` directory. Note that schemas are parsed as JSON5 to be more permissive about the allowed syntax, but the final schemas are written as standard JSON.

```sh
pipenv run python validate_schemas.py
```
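
A minimal sketch of the validation idea, assuming the `json5` and `jsonschema` packages: parse permissively, check each document against its declared meta-schema, and re-serialize valid schemas as standard JSON. The actual `validate_schemas.py` may differ in the details.

```python
# Minimal sketch of the validation step; validate_schemas.py may handle more
# edge cases (encodings, drafts, duplicate detection, error reporting).
import json
import pathlib

import json5  # permissive parser: comments, trailing commas, single quotes
from jsonschema import SchemaError, validators


def validate_schemas(in_dir="data", out_dir="valid_data"):
    for src in pathlib.Path(in_dir).rglob("*.json"):
        try:
            schema = json5.loads(src.read_text(encoding="utf-8"))
            if not isinstance(schema, dict):
                continue  # collected schema files are objects at the top level
            # Pick the validator class matching the declared draft and check
            # the document against that draft's meta-schema.
            validators.validator_for(schema).check_schema(schema)
        except (ValueError, SchemaError):
            continue  # skip files that fail to parse or are not valid schemas
        dest = pathlib.Path(out_dir) / src.relative_to(in_dir)
        dest.parent.mkdir(parents=True, exist_ok=True)
        # Valid schemas are re-serialized as standard JSON.
        dest.write_text(json.dumps(schema, indent=2), encoding="utf-8")


if __name__ == "__main__":
    validate_schemas()
```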

## Step 4: Split into train, test, and validation

Finally, the data is split into training, test, and validation sets. Schemas are always grouped into the same set based on the GitHub organization they come from. Schemas can also be checked for similarity so that very similar schemas end up in the same set.

```sh
pipenv run python train_split.py
```
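
An illustrative sketch of a group-aware split, assuming the organization is the first path component under `valid_data` and using hypothetical split ratios; it omits the optional similarity check. `train_split.py` may implement this differently.

```python
# Illustrative sketch of a group-aware split (hypothetical ratios and grouping
# logic; train_split.py may differ and also applies the similarity check).
import pathlib
import random
from collections import defaultdict


def split_by_org(valid_dir="valid_data", ratios=(0.8, 0.1, 0.1), seed=42):
    # Group schema files by GitHub organization, assumed to be the first
    # path component under valid_data/.
    by_org = defaultdict(list)
    for path in pathlib.Path(valid_dir).rglob("*.json"):
        org = path.relative_to(valid_dir).parts[0]
        by_org[org].append(path)

    orgs = sorted(by_org)
    random.Random(seed).shuffle(orgs)
    cut1 = int(ratios[0] * len(orgs))
    cut2 = cut1 + int(ratios[1] * len(orgs))

    # All schemas from one organization land in the same split.
    return {
        "train": [p for org in orgs[:cut1] for p in by_org[org]],
        "test": [p for org in orgs[cut1:cut2] for p in by_org[org]],
        "validation": [p for org in orgs[cut2:] for p in by_org[org]],
    }


if __name__ == "__main__":
    for name, files in split_by_org().items():
        print(name, len(files))
```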