Datasets
Tasks: Question Answering
Modalities: Text
Formats: parquet
Sub-tasks: extractive-qa
Languages: English
Size: 10K - 100K
ArXiv:
License:
Commit 8d6693f
Parent(s): 0a3a8b7
Update dataset card.
README.md
CHANGED
@@ -6,8 +6,7 @@ language_creators:
 - found
 language:
 - en
-license:
-- cc-by-4.0
+license: cc-by-sa-4.0
 multilinguality:
 - monolingual
 size_categories:
@@ -72,7 +71,7 @@ train-eval-index:
   name: SQuAD
 ---
 
-# Dataset Card for
+# Dataset Card for SQuAD
 
 ## Table of Contents
 - [Dataset Card for "squad"](#dataset-card-for-squad)
@@ -108,25 +107,24 @@ train-eval-index:
 
 ## Dataset Description
 
-- **Homepage:**
+- **Homepage:** https://rajpurkar.github.io/SQuAD-explorer/
 - **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-- **Paper:**
+- **Paper:** https://arxiv.org/abs/1606.05250
 - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-- **Size of downloaded dataset files:** 35.14 MB
-- **Size of the generated dataset:** 89.92 MB
-- **Total amount of disk used:** 125.06 MB
 
 ### Dataset Summary
 
 Stanford Question Answering Dataset (SQuAD) is a reading comprehension dataset, consisting of questions posed by crowdworkers on a set of Wikipedia articles, where the answer to every question is a segment of text, or span, from the corresponding reading passage, or the question might be unanswerable.
 
+SQuAD 1.1 contains 100,000+ question-answer pairs on 500+ articles.
+
 ### Supported Tasks and Leaderboards
 
-
+Question Answering.
 
 ### Languages
 
-
+English (`en`).
 
 ## Dataset Structure
 
@@ -223,23 +221,32 @@ The data fields are the same among all splits.
 
 ### Licensing Information
 
-
+The dataset is distributed under the CC BY-SA 4.0 license.
 
 ### Citation Information
 
 ```
-@
-
-
-
-
-
-
-
-
-
+@inproceedings{rajpurkar-etal-2016-squad,
+    title = "{SQ}u{AD}: 100,000+ Questions for Machine Comprehension of Text",
+    author = "Rajpurkar, Pranav and
+      Zhang, Jian and
+      Lopyrev, Konstantin and
+      Liang, Percy",
+    editor = "Su, Jian and
+      Duh, Kevin and
+      Carreras, Xavier",
+    booktitle = "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing",
+    month = nov,
+    year = "2016",
+    address = "Austin, Texas",
+    publisher = "Association for Computational Linguistics",
+    url = "https://aclanthology.org/D16-1264",
+    doi = "10.18653/v1/D16-1264",
+    pages = "2383--2392",
+    eprint={1606.05250},
+    archivePrefix={arXiv},
+    primaryClass={cs.CL},
 }
-
 ```
 
 
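The updated card tags the dataset as extractive QA: each answer is a span of the reading passage, conventionally stored as the answer text plus a character offset. A minimal sketch of that record shape, assuming the usual SQuAD-style `answers` format (`text` / `answer_start` lists); the example record itself is invented for illustration:

```python
def answer_span_is_consistent(example: dict) -> bool:
    """Check that each answer's text equals the context slice at its offset."""
    context = example["context"]
    answers = example["answers"]
    return all(
        context[start:start + len(text)] == text
        for text, start in zip(answers["text"], answers["answer_start"])
    )


# Hypothetical record in the SQuAD-style extractive-QA shape the card describes.
example = {
    "context": "SQuAD was created by crowdworkers on Wikipedia articles.",
    "question": "Who created SQuAD?",
    "answers": {"text": ["crowdworkers"], "answer_start": [21]},
}

print(answer_span_is_consistent(example))  # True
```

Because the answer is a span, the character offset alone fully determines it, which is what makes the task "extractive" rather than generative.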