rongzhangibm committed
Commit 440f897
1 Parent(s): 901cd75

added README.md

---
annotations_creators:
- no-annotation
language_creators:
- crowdsourced
language:
- en
license:
- cc-by-sa-3.0
multilinguality:
- monolingual
pretty_name: Natural Questions
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- question-answering
task_ids:
- open-domain-qa
paperswithcode_id: natural-questions
---

# Dataset Card for Natural Questions

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** [https://ai.google.com/research/NaturalQuestions/dataset](https://ai.google.com/research/NaturalQuestions/dataset)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 42981 MB
- **Size of the generated dataset:** 139706 MB
- **Total amount of disk used:** 182687 MB

### Dataset Summary

The NQ corpus contains questions from real users, and it requires QA systems to read and comprehend an entire Wikipedia article that may or may not contain the answer to the question. The inclusion of real user questions, and the requirement that solutions should read an entire page to find the answer, cause NQ to be a more realistic and challenging task than prior QA datasets.

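Given the size of the download (roughly 43 GB), it is often convenient to stream the corpus instead of materializing it on disk. Below is a minimal sketch; the dataset identifier `natural_questions` and streaming support are assumptions to adapt to the repository actually being used:

```python
from datasets import load_dataset

# Identifier and streaming support are assumptions; adjust to the actual repository.
nq = load_dataset("natural_questions", split="train", streaming=True)

# Inspect the first example's question text and source article title.
example = next(iter(nq))
print(example["question"]["text"])
print(example["document"]["title"])
```
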
### Supported Tasks and Leaderboards

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Languages

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## Dataset Structure

### Data Instances

#### default

- **Size of downloaded dataset files:** 42981 MB
- **Size of the generated dataset:** 139706 MB
- **Total amount of disk used:** 182687 MB

An example of 'train' looks as follows.
```

```

### Data Fields

The data fields are the same among all splits.

#### default

```python
import datasets

# Feature schema for the "default" configuration.
features = datasets.Features(
    {
        "id": datasets.Value("string"),
        "document": {
            "title": datasets.Value("string"),
            "url": datasets.Value("string"),
            "html": datasets.Value("string"),
            "tokens": datasets.features.Sequence(
                {
                    "token": datasets.Value("string"),
                    "is_html": datasets.Value("bool"),
                    "start_byte": datasets.Value("int64"),
                    "end_byte": datasets.Value("int64"),
                }
            ),
        },
        "question": {
            "text": datasets.Value("string"),
            "tokens": datasets.features.Sequence(datasets.Value("string")),
        },
        "long_answer_candidates": datasets.features.Sequence(
            {
                "start_token": datasets.Value("int64"),
                "end_token": datasets.Value("int64"),
                "start_byte": datasets.Value("int64"),
                "end_byte": datasets.Value("int64"),
                "top_level": datasets.Value("bool"),
            }
        ),
        "annotations": datasets.features.Sequence(
            {
                "id": datasets.Value("string"),
                "long_answer": {
                    "start_token": datasets.Value("int64"),
                    "end_token": datasets.Value("int64"),
                    "start_byte": datasets.Value("int64"),
                    "end_byte": datasets.Value("int64"),
                    "candidate_index": datasets.Value("int64"),
                },
                "short_answers": datasets.features.Sequence(
                    {
                        "start_token": datasets.Value("int64"),
                        "end_token": datasets.Value("int64"),
                        "start_byte": datasets.Value("int64"),
                        "end_byte": datasets.Value("int64"),
                        "text": datasets.Value("string"),
                    }
                ),
                "yes_no_answer": datasets.features.ClassLabel(
                    names=["NO", "YES"]
                ),  # Can also be -1 for NONE.
            }
        ),
    }
)
```

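The long answer, short answers, and long-answer candidates are expressed as token offsets into `document.tokens` rather than as plain strings, and `yes_no_answer` is `-1` when no yes/no judgement applies. The helper below is a sketch of how a candidate span might be turned back into text; it assumes the usual `datasets` behavior of materializing a `Sequence` of a dict as a dict of parallel lists, and the `candidate_text` name is ours:

```python
def candidate_text(example, candidate_index):
    """Reconstruct the text of one long-answer candidate, skipping HTML tokens."""
    tokens = example["document"]["tokens"]      # parallel lists: token, is_html, ...
    cands = example["long_answer_candidates"]   # parallel lists: start_token, end_token, ...
    start = cands["start_token"][candidate_index]
    end = cands["end_token"][candidate_index]
    words = [
        tokens["token"][i]
        for i in range(start, end)
        if not tokens["is_html"][i]
    ]
    return " ".join(words)


# For example, the first candidate of the example loaded above:
# print(candidate_text(example, 0))
```
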
### Data Splits

| name    |  train | validation |
|---------|-------:|-----------:|
| default | 307373 |       7830 |
| dev     |    N/A |       7830 |

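To see which configurations and splits a copy of the dataset actually exposes (the table above suggests `default` and `dev`), the inspection helpers in `datasets` can be used without materializing the full corpus; the identifier is again an assumption:

```python
from datasets import get_dataset_config_names, get_dataset_split_names

# Identifier is an assumption; adjust to the repository actually being used.
for config in get_dataset_config_names("natural_questions"):
    print(config, get_dataset_split_names("natural_questions", config))
```
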
## Dataset Creation

### Curation Rationale

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

#### Who are the source language producers?

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Annotations

#### Annotation process

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

#### Who are the annotators?

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Personal and Sensitive Information

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Discussion of Biases

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Other Known Limitations

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## Additional Information

### Dataset Curators

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Licensing Information

[Creative Commons Attribution-ShareAlike 3.0 Unported](https://creativecommons.org/licenses/by-sa/3.0/).

### Citation Information

```
@article{47761,
  title = {Natural Questions: a Benchmark for Question Answering Research},
  author = {Tom Kwiatkowski and Jennimaria Palomaki and Olivia Redfield and Michael Collins and Ankur Parikh and Chris Alberti and Danielle Epstein and Illia Polosukhin and Matthew Kelcey and Jacob Devlin and Kenton Lee and Kristina N. Toutanova and Llion Jones and Ming-Wei Chang and Andrew Dai and Jakob Uszkoreit and Quoc Le and Slav Petrov},
  year = {2019},
  journal = {Transactions of the Association for Computational Linguistics}
}
```

### Contributions

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)