Datasets: dart
Sub-tasks: rdf-to-text
Languages: English
ArXiv: arxiv:2007.02871
License: mit
Commit c4f2851 by system (HF staff), 0 parent(s):

Update files from the datasets library (from 1.2.0)

Release notes: https://github.com/huggingface/datasets/releases/tag/1.2.0

Files changed (5):

1. .gitattributes +27 -0
2. README.md +183 -0
3. dart.py +98 -0
4. dataset_infos.json +1 -0
5. dummy/0.0.0/dummy_data.zip +3 -0

.gitattributes ADDED
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bin.* filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zstandard filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text

README.md ADDED
---
annotations_creators:
- crowdsourced
- machine-generated
language_creators:
- crowdsourced
- machine-generated
languages:
- en
licenses:
- mit
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- extended|wikitable_questions
- extended|wikisql
- extended|web_nlg
- extended|cleaned_e2e
task_categories:
- conditional-text-generation
task_ids:
- conditional-text-generation-other-rdf-to-text
---

# Dataset Card for DART

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)

## Dataset Description

- **Homepage:** [homepage](https://github.com/Yale-LILY/dart)
- **Repository:** [github](https://github.com/Yale-LILY/dart)
- **Paper:** [paper](https://arxiv.org/abs/2007.02871)
- **Leaderboard:** [leaderboard](https://github.com/Yale-LILY/dart#leaderboard)

### Dataset Summary

DART is a large dataset for open-domain structured data record to text generation. We consider the structured data record input as a set of RDF entity-relation triples, a format widely used for knowledge representation and semantic description. DART consists of 82,191 examples across different domains, with each input being a semantic RDF triple set derived from data records in tables and the tree ontology of the schema, annotated with sentence descriptions that cover all facts in the triple set. This hierarchical, structured format and its open-domain nature differentiate DART from other existing table-to-text corpora.

### Supported Tasks and Leaderboards

The task associated with DART is text generation from data records given as RDF triples:

- `conditional-text-generation-other-rdf-to-text`: The dataset can be used to train a model for text generation from RDF triples, that is, generating a textual description of structured data. Success on this task is typically measured by a *high* [BLEU](https://huggingface.co/metrics/bleu), [METEOR](https://huggingface.co/metrics/meteor), [BLEURT](https://huggingface.co/metrics/bleurt), [MoverScore](https://huggingface.co/metrics/mover_score), and [BERTScore](https://huggingface.co/metrics/bert_score), and a *low* [TER](https://huggingface.co/metrics/ter) (an error rate). The [BART-large model](https://huggingface.co/facebook/bart-large) (see [BART](https://huggingface.co/transformers/model_doc/bart.html)) currently achieves the following scores:

|      | BLEU  | METEOR | TER  | MoverScore | BERTScore | BLEURT |
| ---- | ----- | ------ | ---- | ---------- | --------- | ------ |
| BART | 37.06 | 0.36   | 0.57 | 0.44       | 0.92      | 0.22   |

This task has an active leaderboard, which can be found [here](https://github.com/Yale-LILY/dart#leaderboard) and ranks models based on the above metrics.

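As a quick sanity check, generated sentences can be scored against the reference annotations with a metric from the `datasets` library. This is a minimal sketch, not the official evaluation: the prediction string is made up, and the leaderboard relies on the evaluation scripts in the DART repository.

```
from datasets import load_dataset, load_metric

dart = load_dataset("dart", split="validation")
sacrebleu = load_metric("sacrebleu")

# Hypothetical model output for the first validation example.
predictions = ["First Clearing is located on NYS 52, 1 mi. from Youngsville."]
# sacrebleu expects one list of reference strings per prediction.
references = [dart[0]["annotations"]["text"]]

print(sacrebleu.compute(predictions=predictions, references=references)["score"])
```
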
### Languages

The dataset is in English (en).

## Dataset Structure

### Data Instances

Here is an example from the dataset:

```
{'annotations': {'source': ['WikiTableQuestions_mturk'],
                 'text': ['First Clearing\tbased on Callicoon, New York and location at On NYS 52 1 Mi. Youngsville']},
 'subtree_was_extended': False,
 'tripleset': [['First Clearing', 'LOCATION', 'On NYS 52 1 Mi. Youngsville'],
               ['On NYS 52 1 Mi. Youngsville', 'CITY_OR_TOWN', 'Callicoon, New York']]}
```

It contains one annotation, whose textual description is 'First Clearing\tbased on Callicoon, New York and location at On NYS 52 1 Mi. Youngsville'. The RDF triples used to generate this description are in `tripleset` and are each formatted as (subject, predicate, object).

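Instances like this can be inspected programmatically (a minimal sketch using the `datasets` library; the first training example is not necessarily the one shown above):

```
from datasets import load_dataset

dart = load_dataset("dart")
example = dart["train"][0]
print(example["tripleset"])            # list of [subject, predicate, object] triples
print(example["annotations"]["text"])  # parallel list of reference descriptions
```
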
### Data Fields

The different fields are:

- `annotations`:
  - `text`: list of text descriptions of the triple set
  - `source`: list of sources of the RDF triples (WikiTable, e2e, etc.)
- `subtree_was_extended`: boolean indicating whether the subtree considered during dataset construction was extended; this field is sometimes missing, in which case it is set to `None`
- `tripleset`: the RDF triples, as a list of (subject, predicate, object) string triples

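Because `tripleset` is a nested list rather than a single string, a common preprocessing step for sequence-to-sequence models is to flatten it into one input sequence. The `<H>`/`<R>`/`<T>` markers below are an illustrative convention, not part of the dataset:

```
def linearize(tripleset):
    # Flatten [subject, predicate, object] triples into a single string.
    return " ".join(f"<H> {s} <R> {p} <T> {o}" for s, p, o in tripleset)

print(linearize([["First Clearing", "LOCATION", "On NYS 52 1 Mi. Youngsville"]]))
# <H> First Clearing <R> LOCATION <T> On NYS 52 1 Mi. Youngsville
```
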
### Data Splits

There are three splits: train, validation, and test.

|             | Train | Valid | Test |
| ----------- | ----- | ----- | ---- |
| N. Examples | 30526 | 2768  | 6959 |

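These counts can be verified after loading (a minimal sketch; the first call downloads the v1.1.1 JSON files):

```
from datasets import load_dataset

dart = load_dataset("dart")
print({split: dart[split].num_rows for split in dart})
# {'train': 30526, 'validation': 2768, 'test': 6959}
```
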
## Dataset Creation

### Curation Rationale

Automatically generating textual descriptions from structured data inputs is crucial to improving the accessibility of knowledge bases to lay users.

### Source Data

DART comes from existing datasets that cover a variety of different domains while allowing the construction of a tree ontology and the formation of RDF triple sets as semantic representations. The datasets used are WikiTableQuestions, WikiSQL, WebNLG and Cleaned E2E.

#### Initial Data Collection and Normalization

DART is constructed using multiple complementary methods: (1) human annotation on open-domain Wikipedia tables from WikiTableQuestions (Pasupat and Liang, 2015) and WikiSQL (Zhong et al., 2017), (2) automatic conversion of questions in WikiSQL to declarative sentences, and (3) incorporation of existing datasets including WebNLG 2017 (Gardent et al., 2017a,b; Shimorina and Gardent, 2018) and Cleaned E2E (Novikova et al., 2017b; Dušek et al., 2018, 2019).

#### Who are the source language producers?

[More Information Needed]

### Annotations

DART is constructed using multiple complementary methods: (1) human annotation on open-domain Wikipedia tables from WikiTableQuestions (Pasupat and Liang, 2015) and WikiSQL (Zhong et al., 2017), (2) automatic conversion of questions in WikiSQL to declarative sentences, and (3) incorporation of existing datasets including WebNLG 2017 (Gardent et al., 2017a,b; Shimorina and Gardent, 2018) and Cleaned E2E (Novikova et al., 2017b; Dušek et al., 2018, 2019).

#### Annotation process

The two-stage annotation process for constructing tripleset-sentence pairs is based on a tree-structured ontology of each table. First, internal skilled annotators denote the parent column for each column header. Then, a larger number of annotators provide a sentential description of an automatically chosen subset of table cells in a row.

#### Who are the annotators?

[More Information Needed]

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

[More Information Needed]

### Licensing Information

The dataset is released under the MIT license (see [here](https://github.com/Yale-LILY/dart/blob/master/LICENSE)).

### Citation Information

```
@article{radev2020dart,
  title={DART: Open-Domain Structured Data Record to Text Generation},
  author={Dragomir Radev and Rui Zhang and Amrit Rau and Abhinand Sivaprasad and Chiachun Hsieh and Nazneen Fatema Rajani and Xiangru Tang and Aadit Vyas and Neha Verma and Pranav Krishna and Yangxiaokang Liu and Nadia Irwanto and Jessica Pan and Faiaz Rahman and Ahmad Zaidi and Murori Mutuma and Yasin Tarabar and Ankit Gupta and Tao Yu and Yi Chern Tan and Xi Victoria Lin and Caiming Xiong and Richard Socher},
  journal={arXiv preprint arXiv:2007.02871},
  year={2020}
}
```

dart.py ADDED
# coding=utf-8
# Copyright 2020 HuggingFace Datasets Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# Lint as: python3
"""DART: Open-Domain Structured Data Record to Text Generation"""

import json

import datasets


_CITATION = """\
@article{radev2020dart,
  title={DART: Open-Domain Structured Data Record to Text Generation},
  author={Dragomir Radev and Rui Zhang and Amrit Rau and Abhinand Sivaprasad and Chiachun Hsieh and Nazneen Fatema Rajani and Xiangru Tang and Aadit Vyas and Neha Verma and Pranav Krishna and Yangxiaokang Liu and Nadia Irwanto and Jessica Pan and Faiaz Rahman and Ahmad Zaidi and Murori Mutuma and Yasin Tarabar and Ankit Gupta and Tao Yu and Yi Chern Tan and Xi Victoria Lin and Caiming Xiong and Richard Socher},
  journal={arXiv preprint arXiv:2007.02871},
  year={2020}
}
"""

_DESCRIPTION = """\
DART is a large and open-domain structured DAta Record to Text generation corpus with high-quality
sentence annotations with each input being a set of entity-relation triples following a tree-structured ontology.
It consists of 82191 examples across different domains with each input being a semantic RDF triple set derived
from data records in tables and the tree ontology of table schema, annotated with sentence description that
covers all facts in the triple set.

DART is released in the following paper where you can find more details and baseline results:
https://arxiv.org/abs/2007.02871
"""

_URL = "https://raw.githubusercontent.com/Yale-LILY/dart/master/data/v1.1.1/"
_TRAINING_FILE = "dart-v1.1.1-full-train.json"
_DEV_FILE = "dart-v1.1.1-full-dev.json"
_TEST_FILE = "dart-v1.1.1-full-test.json"


class Dart(datasets.GeneratorBasedBuilder):
    """Dart dataset."""

    def _info(self):
        return datasets.DatasetInfo(
            description=_DESCRIPTION,
            features=datasets.Features(
                {
                    # A list of (subject, predicate, object) string triples.
                    "tripleset": datasets.Sequence(datasets.Sequence(datasets.Value("string"))),
                    "subtree_was_extended": datasets.Value("bool"),
                    # Parallel lists: one source and one text per annotation.
                    "annotations": datasets.Sequence(
                        {
                            "source": datasets.Value("string"),
                            "text": datasets.Value("string"),
                        }
                    ),
                }
            ),
            supervised_keys=None,
            homepage="https://github.com/Yale-LILY/dart",
            citation=_CITATION,
        )

    def _split_generators(self, dl_manager):
        """Returns SplitGenerators."""
        urls_to_download = {
            "train": f"{_URL}{_TRAINING_FILE}",
            "dev": f"{_URL}{_DEV_FILE}",
            "test": f"{_URL}{_TEST_FILE}",
        }
        downloaded_files = dl_manager.download_and_extract(urls_to_download)

        return [
            datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={"filepath": downloaded_files["train"]}),
            datasets.SplitGenerator(name=datasets.Split.VALIDATION, gen_kwargs={"filepath": downloaded_files["dev"]}),
            datasets.SplitGenerator(name=datasets.Split.TEST, gen_kwargs={"filepath": downloaded_files["test"]}),
        ]

    def _generate_examples(self, filepath):
        with open(filepath, encoding="utf-8") as f:
            data = json.loads(f.read())
            for example_idx, example in enumerate(data):
                yield example_idx, {
                    "tripleset": example["tripleset"],
                    "subtree_was_extended": example.get("subtree_was_extended", None),  # some examples lack this field
                    "annotations": {
                        "source": [annotation["source"] for annotation in example["annotations"]],
                        "text": [annotation["text"] for annotation in example["annotations"]],
                    },
                }

dataset_infos.json ADDED
{"default": {"description": "DART is a large and open-domain structured DAta Record to Text generation corpus with high-quality\nsentence annotations with each input being a set of entity-relation triples following a tree-structured ontology.\nIt consists of 82191 examples across different domains with each input being a semantic RDF triple set derived\nfrom data records in tables and the tree ontology of table schema, annotated with sentence description that\ncovers all facts in the triple set.\n\nDART is released in the following paper where you can find more details and baseline results:\nhttps://arxiv.org/abs/2007.02871\n", "citation": "@article{radev2020dart,\n  title={DART: Open-Domain Structured Data Record to Text Generation},\n  author={Dragomir Radev and Rui Zhang and Amrit Rau and Abhinand Sivaprasad and Chiachun Hsieh and Nazneen Fatema Rajani and Xiangru Tang and Aadit Vyas and Neha Verma and Pranav Krishna and Yangxiaokang Liu and Nadia Irwanto and Jessica Pan and Faiaz Rahman and Ahmad Zaidi and Murori Mutuma and Yasin Tarabar and Ankit Gupta and Tao Yu and Yi Chern Tan and Xi Victoria Lin and Caiming Xiong and Richard Socher},\n  journal={arXiv preprint arXiv:2007.02871},\n  year={2020}\n", "homepage": "https://github.com/Yale-LILY/dart", "license": "", "features": {"tripleset": {"feature": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "length": -1, "id": null, "_type": "Sequence"}, "subtree_was_extended": {"dtype": "bool", "id": null, "_type": "Value"}, "annotations": {"feature": {"source": {"dtype": "string", "id": null, "_type": "Value"}, "text": {"dtype": "string", "id": null, "_type": "Value"}}, "length": -1, "id": null, "_type": "Sequence"}}, "post_processed": null, "supervised_keys": null, "builder_name": "dart", "config_name": "default", "version": {"version_str": "0.0.0", "description": null, "major": 0, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 12960061, "num_examples": 30526, "dataset_name": "dart"}, "validation": {"name": "validation", "num_bytes": 1457414, "num_examples": 2768, "dataset_name": "dart"}, "test": {"name": "test", "num_bytes": 2989087, "num_examples": 6959, "dataset_name": "dart"}}, "download_checksums": {"https://raw.githubusercontent.com/Yale-LILY/dart/master/data/v1.1.1/dart-v1.1.1-full-train.json": {"num_bytes": 22001131, "checksum": "0671b56f4b090ccf1c0187364d45c6f1214421d6f1081a21800596860f314e70"}, "https://raw.githubusercontent.com/Yale-LILY/dart/master/data/v1.1.1/dart-v1.1.1-full-dev.json": {"num_bytes": 2370637, "checksum": "5038f3543b6d59b94ac4e3f69d15a0b01d8578913f862142e7c560200dd6e434"}, "https://raw.githubusercontent.com/Yale-LILY/dart/master/data/v1.1.1/dart-v1.1.1-full-test.json": {"num_bytes": 5001020, "checksum": "c772553b482dd5fc7b8ad90d68889062a2603e28d4449ee1f162006819e0565e"}}, "download_size": 29372788, "post_processing_size": null, "dataset_size": 17406562, "size_in_bytes": 46779350}}

dummy/0.0.0/dummy_data.zip ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:28f41f58ac17e0d7877900912b3569ec3cd900227ec897c15f4cc3dcff5d7456
size 2115