---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
language:
- ar
- bn
- en
- fi
- id
- ja
- ko
- ru
- sw
- te
- th
license:
- apache-2.0
multilinguality:
- multilingual
size_categories:
- unknown
source_datasets:
- extended|wikipedia
task_categories:
- question-answering
task_ids:
- extractive-qa
paperswithcode_id: tydi-qa
pretty_name: TyDi QA
dataset_info:
- config_name: primary_task
  features:
  - name: passage_answer_candidates
    sequence:
    - name: plaintext_start_byte
      dtype: int32
    - name: plaintext_end_byte
      dtype: int32
  - name: question_text
    dtype: string
  - name: document_title
    dtype: string
  - name: language
    dtype: string
  - name: annotations
    sequence:
    - name: passage_answer_candidate_index
      dtype: int32
    - name: minimal_answers_start_byte
      dtype: int32
    - name: minimal_answers_end_byte
      dtype: int32
    - name: yes_no_answer
      dtype: string
  - name: document_plaintext
    dtype: string
  - name: document_url
    dtype: string
  splits:
  - name: train
    num_bytes: 5550573801
    num_examples: 166916
  - name: validation
    num_bytes: 484380347
    num_examples: 18670
  download_size: 2912112378
  dataset_size: 6034954148
- config_name: secondary_task
  features:
  - name: id
    dtype: string
  - name: title
    dtype: string
  - name: context
    dtype: string
  - name: question
    dtype: string
  - name: answers
    sequence:
    - name: text
      dtype: string
    - name: answer_start
      dtype: int32
  splits:
  - name: train
    num_bytes: 52948467
    num_examples: 49881
  - name: validation
    num_bytes: 5006433
    num_examples: 5077
  download_size: 29402238
  dataset_size: 57954900
configs:
- config_name: primary_task
  data_files:
  - split: train
    path: primary_task/train-*
  - split: validation
    path: primary_task/validation-*
- config_name: secondary_task
  data_files:
  - split: train
    path: secondary_task/train-*
  - split: validation
    path: secondary_task/validation-*
---

# Dataset Card for "tydiqa"

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** [https://github.com/google-research-datasets/tydiqa](https://github.com/google-research-datasets/tydiqa)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 3.91 GB
- **Size of the generated dataset:** 6.10 GB
- **Total amount of disk used:** 10.00 GB

### Dataset Summary

TyDi QA is a question answering dataset covering 11 typologically diverse languages with 204K question-answer pairs.
The languages of TyDi QA are diverse with regard to their typology -- the set of linguistic features each language
expresses -- such that we expect models that perform well on this set to generalize across a large number of the
world's languages. It contains language phenomena that would not be found in English-only corpora. To provide a
realistic information-seeking task and avoid priming effects, questions are written by people who want to know the
answer but do not yet know it (unlike SQuAD and its descendants), and the data is collected directly in each language
without the use of translation (unlike MLQA and XQuAD).

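A minimal loading sketch, assuming the dataset is hosted on the Hugging Face Hub under the id `tydiqa` (an assumption; adjust to the actual repository id) with the two configurations declared in the front matter above:

```python
from datasets import load_dataset

# Primary task: passage selection and minimal answer spans over full articles.
primary = load_dataset("tydiqa", "primary_task")

# Secondary task: SQuAD-style extractive QA over a single context passage.
secondary = load_dataset("tydiqa", "secondary_task")

print(primary)  # DatasetDict with 'train' and 'validation' splits
print(secondary["validation"][0]["question"])
```
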
### Supported Tasks and Leaderboards

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Languages

TyDi QA covers 11 typologically diverse languages: Arabic (`ar`), Bengali (`bn`), English (`en`), Finnish (`fi`), Indonesian (`id`), Japanese (`ja`), Korean (`ko`), Russian (`ru`), Swahili (`sw`), Telugu (`te`), and Thai (`th`).

## Dataset Structure

### Data Instances

#### primary_task

- **Size of downloaded dataset files:** 1.95 GB
- **Size of the generated dataset:** 6.04 GB
- **Total amount of disk used:** 7.99 GB

An example of 'validation' looks as follows.
```
This example was too long and was cropped:

{
    "annotations": {
        "minimal_answers_end_byte": [-1, -1, -1],
        "minimal_answers_start_byte": [-1, -1, -1],
        "passage_answer_candidate_index": [-1, -1, -1],
        "yes_no_answer": ["NONE", "NONE", "NONE"]
    },
    "document_plaintext": "\"\\nรองศาสตราจารย์[1] หม่อมราชวงศ์สุขุมพันธุ์ บริพัตร (22 กันยายน 2495 -) ผู้ว่าราชการกรุงเทพมหานครคนที่ 15 อดีตรองหัวหน้าพรรคปร...",
    "document_title": "หม่อมราชวงศ์สุขุมพันธุ์ บริพัตร",
    "document_url": "\"https://th.wikipedia.org/wiki/%E0%B8%AB%E0%B8%A1%E0%B9%88%E0%B8%AD%E0%B8%A1%E0%B8%A3%E0%B8%B2%E0%B8%8A%E0%B8%A7%E0%B8%87%E0%B8%...",
    "language": "thai",
    "passage_answer_candidates": "{\"plaintext_end_byte\": [494, 1779, 2931, 3904, 4506, 5588, 6383, 7122, 8224, 9375, 10473, 12563, 15134, 17765, 19863, 21902, 229...",
    "question_text": "\"หม่อมราชวงศ์สุขุมพันธุ์ บริพัตร เรียนจบจากที่ไหน ?\"..."
}
```

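In the instance above, every annotation has `passage_answer_candidate_index` -1 and `yes_no_answer` "NONE", which is taken here to mean the annotator found no answer in the document (the -1 convention and the Hub id `tydiqa` are assumptions, not stated on this card). A minimal sketch for keeping only examples with at least one passage answer:

```python
from datasets import load_dataset

primary = load_dataset("tydiqa", "primary_task")  # dataset id assumed

def has_passage_answer(example):
    # -1 is assumed to mean "no passage answer selected by this annotator".
    return any(idx >= 0 for idx in example["annotations"]["passage_answer_candidate_index"])

answerable_train = primary["train"].filter(has_passage_answer)
print(len(answerable_train), "of", len(primary["train"]), "training examples have a passage answer")
```
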
#### secondary_task

- **Size of downloaded dataset files:** 1.95 GB
- **Size of the generated dataset:** 58.03 MB
- **Total amount of disk used:** 2.01 GB

An example of 'validation' looks as follows.
```
This example was too long and was cropped:

{
    "answers": {
        "answer_start": [394],
        "text": ["بطولتين"]
    },
    "context": "\"أقيمت البطولة 21 مرة، شارك في النهائيات 78 دولة، وعدد الفرق التي فازت بالبطولة حتى الآن 8 فرق، ويعد المنتخب البرازيلي الأكثر تت...",
    "id": "arabic-2387335860751143628-1",
    "question": "\"كم عدد مرات فوز الأوروغواي ببطولة كاس العالم لكرو القدم؟\"...",
    "title": "قائمة نهائيات كأس العالم"
}
```

### Data Fields

The data fields are the same across all splits.

#### primary_task
- `passage_answer_candidates`: a dictionary feature containing:
  - `plaintext_start_byte`: an `int32` feature.
  - `plaintext_end_byte`: an `int32` feature.
- `question_text`: a `string` feature.
- `document_title`: a `string` feature.
- `language`: a `string` feature.
- `annotations`: a dictionary feature containing:
  - `passage_answer_candidate_index`: an `int32` feature.
  - `minimal_answers_start_byte`: an `int32` feature.
  - `minimal_answers_end_byte`: an `int32` feature.
  - `yes_no_answer`: a `string` feature.
- `document_plaintext`: a `string` feature.
- `document_url`: a `string` feature.

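As the field names indicate, the offsets in `passage_answer_candidates` and `annotations` are byte positions, not character positions. A minimal sketch of recovering one passage candidate's text, assuming the offsets index into the UTF-8 encoding of `document_plaintext`:

```python
def passage_text(example, candidate_index):
    """Return the plaintext of one passage answer candidate.

    Assumes plaintext_start_byte / plaintext_end_byte are offsets into the
    UTF-8 encoding of document_plaintext (byte offsets, not character offsets).
    """
    doc_bytes = example["document_plaintext"].encode("utf-8")
    candidates = example["passage_answer_candidates"]
    start = candidates["plaintext_start_byte"][candidate_index]
    end = candidates["plaintext_end_byte"][candidate_index]
    return doc_bytes[start:end].decode("utf-8", errors="replace")
```

The same byte slicing would apply to `minimal_answers_start_byte` / `minimal_answers_end_byte` whenever those offsets are non-negative.
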
#### secondary_task
- `id`: a `string` feature.
- `title`: a `string` feature.
- `context`: a `string` feature.
- `question`: a `string` feature.
- `answers`: a dictionary feature containing:
  - `text`: a `string` feature.
  - `answer_start`: an `int32` feature.

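For the secondary task, `answer_start` locates each gold answer inside `context`. A minimal sketch, assuming SQuAD-style character offsets:

```python
def first_answer_span(example):
    """Return (start, end) character indices of the first gold answer in context."""
    start = example["answers"]["answer_start"][0]
    text = example["answers"]["text"][0]
    end = start + len(text)
    # Sanity-check the character-offset assumption.
    assert example["context"][start:end] == text
    return start, end
```
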
### Data Splits

| name           |  train | validation |
| -------------- | -----: | ---------: |
| primary_task   | 166916 |      18670 |
| secondary_task |  49881 |       5077 |

## Dataset Creation

### Curation Rationale

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

#### Who are the source language producers?

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Annotations

#### Annotation process

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

#### Who are the annotators?

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Personal and Sensitive Information

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Discussion of Biases

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Other Known Limitations

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## Additional Information

### Dataset Curators

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Licensing Information

The dataset is released under the Apache License 2.0 (see the `license` field in the dataset metadata above).

### Citation Information

```
@article{tydiqa,
  title   = {TyDi QA: A Benchmark for Information-Seeking Question Answering in Typologically Diverse Languages},
  author  = {Jonathan H. Clark and Eunsol Choi and Michael Collins and Dan Garrette and Tom Kwiatkowski and Vitaly Nikolaev and Jennimaria Palomaki},
  year    = {2020},
  journal = {Transactions of the Association for Computational Linguistics}
}
```

### Contributions

Thanks to [@thomwolf](https://github.com/thomwolf), [@albertvillanova](https://github.com/albertvillanova), [@lewtun](https://github.com/lewtun), [@patrickvonplaten](https://github.com/patrickvonplaten) for adding this dataset.