---
language:
- en
license:
- unknown
---
# JSON Schema Dataset

This dataset is a collection of JSON Schema documents gathered from GitHub by searching with the Sourcegraph API.

# Step 1: Find a list of JSON Schema paths

The [Sourcegraph](https://sourcegraph.com/) code search API is used to find files with a `.json` extension that contain `{\n "$schema": "https://json-schema.org/"`.
This is somewhat restrictive, but it still finds a large number of schemas.

    pipenv run python slurp.py --outfile repos.csv
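
For illustration only, a sketch of what such a search could look like in Python is shown below. This is not the actual `slurp.py`; the GraphQL endpoint and field names, the exact search query, and the `SRC_ACCESS_TOKEN` environment variable are assumptions.

```python
"""Hypothetical sketch of a Sourcegraph search, not the actual slurp.py."""
import csv
import os

import requests

# Illustrative search query: .json files whose first property is a $schema
# reference to json-schema.org (written as a regexp so \n matches a newline).
SEARCH_QUERY = r'file:\.json$ count:all patternType:regexp \{\n\s*"\$schema": "https://json-schema\.org/"'

# Assumed shape of the Sourcegraph GraphQL search API.
GRAPHQL_QUERY = """
query ($query: String!) {
  search(query: $query) {
    results {
      results {
        ... on FileMatch {
          repository { name }
          file { path }
        }
      }
    }
  }
}
"""

def find_schema_paths(outfile="repos.csv"):
    resp = requests.post(
        "https://sourcegraph.com/.api/graphql",
        json={"query": GRAPHQL_QUERY, "variables": {"query": SEARCH_QUERY}},
        headers={"Authorization": f"token {os.environ['SRC_ACCESS_TOKEN']}"},
    )
    resp.raise_for_status()
    matches = resp.json()["data"]["search"]["results"]["results"]
    with open(outfile, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["repository", "path"])
        for m in matches:
            writer.writerow([m["repository"]["name"], m["file"]["path"]])

if __name__ == "__main__":
    find_schema_paths()
```

Writing only the repository name and file path to a CSV keeps the search step decoupled from the download step that follows.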

# Step 2: Download the JSON Schema files

The following script downloads each schema that comes from GitHub and saves it into subfolders of the `data` directory.

    ./fetch_files.sh
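
A rough Python equivalent of what the shell script does might look like the following; the CSV column names, the use of `HEAD` to reference each repository's default branch, and the `data/<org>/<repo>/<path>` layout are assumptions rather than the script's actual behavior.

```python
"""Hypothetical Python equivalent of fetch_files.sh."""
import csv
import pathlib

import requests

def fetch_files(infile="repos.csv", outdir="data"):
    with open(infile, newline="") as f:
        for row in csv.DictReader(f):
            org_repo = row["repository"].removeprefix("github.com/")
            # HEAD resolves to the default branch on raw.githubusercontent.com.
            raw_url = f"https://raw.githubusercontent.com/{org_repo}/HEAD/{row['path']}"
            dest = pathlib.Path(outdir) / org_repo / row["path"]
            dest.parent.mkdir(parents=True, exist_ok=True)
            resp = requests.get(raw_url)
            if resp.ok:
                dest.write_bytes(resp.content)

if __name__ == "__main__":
    fetch_files()
```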

# Step 3: Validate each JSON Schema

The following script reads each schema in the `data` directory and confirms that it is a valid JSON Schema.
A copy of every valid schema is placed in the `valid_data` directory.
Note that schemas are parsed as [JSON5](https://json5.org/) to be more permissive about the accepted syntax, but the final schemas are written as standard JSON.

    pipenv run python validate_schemas.py
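
A minimal sketch of this step, assuming the `json5` and `jsonschema` packages, could look like the code below; the real `validate_schemas.py` may handle more cases.

```python
"""Hypothetical sketch of validate_schemas.py."""
import json
import pathlib

import json5
from jsonschema import SchemaError, validators

def validate_schemas(indir="data", outdir="valid_data"):
    for path in pathlib.Path(indir).rglob("*.json"):
        try:
            # JSON5 accepts comments, trailing commas, and other relaxed syntax.
            schema = json5.loads(path.read_text(encoding="utf-8"))
            # Pick the validator class matching the schema's $schema and check it.
            validators.validator_for(schema).check_schema(schema)
        except (ValueError, SchemaError):
            continue
        # Valid schemas are re-serialized as standard JSON under valid_data/.
        dest = pathlib.Path(outdir) / path.relative_to(indir)
        dest.parent.mkdir(parents=True, exist_ok=True)
        dest.write_text(json.dumps(schema, indent=2), encoding="utf-8")

if __name__ == "__main__":
    validate_schemas()
```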

# Step 4: Split into train, test, and validation

Finally, the data is split into training, test, and validation sets.
Schemas are always grouped into the same set based on the GitHub organization they come from.
Schemas can also be checked for similarity so that very similar schemas end up in the same set.

    pipenv run python train_split.py
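
A simplified sketch of a grouped split (ignoring the optional similarity check) might look like the following; the 80/10/10 ratios, the random seed, and the directory layout are illustrative assumptions, not the behavior of the actual `train_split.py`.

```python
"""Hypothetical sketch of an organization-grouped split."""
import collections
import pathlib
import random

def split_by_org(indir="valid_data", ratios=(0.8, 0.1, 0.1), seed=42):
    # Group schema files by GitHub organization (assumed to be the first
    # directory level under valid_data/) so an organization never crosses splits.
    by_org = collections.defaultdict(list)
    for path in pathlib.Path(indir).rglob("*.json"):
        org = path.relative_to(indir).parts[0]
        by_org[org].append(path)

    orgs = sorted(by_org)
    random.Random(seed).shuffle(orgs)
    n_train = round(len(orgs) * ratios[0])
    n_test = round(len(orgs) * ratios[1])

    groups = {
        "train": orgs[:n_train],
        "test": orgs[n_train:n_train + n_test],
        "validation": orgs[n_train + n_test:],
    }
    return {name: [p for org in group for p in by_org[org]]
            for name, group in groups.items()}

if __name__ == "__main__":
    for name, files in split_by_org().items():
        print(name, len(files))
```

Splitting at the organization level rather than the file level avoids leaking near-duplicate schemas from the same project across the training and evaluation sets.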