e-budur committed
Commit b1dcfcc
1 Parent(s): 4fe9873

Update README.md

Files changed (1): README.md (+90 -41)

README.md CHANGED
@@ -128,38 +128,79 @@ SQuAD-TR is a machine translated version of the original [SQuAD2.0](https://rajp
  ### Data Instances
  
  Our data instances follow that of the original SQuAD2.0 dataset.
- Shared below is an example instance from the default train dataset.
  
  ```
  {
- "id": "56be85543aeaaa14008c9063",
- "title": "Beyonce",
- "context": "Beyoncé Giselle Knowles-Carter (d. 4 Eylül 1981), ABD'li şarkıcı, söz yazarı, prodüktör ve aktris. Houston, Teksas'ta doğup büyüdü, çocukken çeşitli şarkı ve dans yarışmalarında sahne aldı ve 1990'ların sonlarında R&B kız grubu Destiny's Child'ın solisti olarak ün kazandı. Babası Mathew Knowles tarafından yönetilen grup tüm zamanların en çok satan kız gruplarından biri oldu. Beyoncé'nin ilk albümü Dangerously in Love'ın (2003) yayınlanmasını izlemiştir ve beş Grammy Ödülü kazanmış ve Billboard Hot 100 bir numaralı single'ları “Crazy in Love” ve “Baby Boy\"un yer aldığı Beyoncé'nin ilk albümü Dangerously in Love (2003) yayınlandı.",
- "question": "Beyonce ne zaman popüler olmaya başladı?",
- "answers": [
- {
- "text": "1990'ların sonlarında",
- "answer_start": 192
- }
  ]
  }
  
  ```
- Notes:
  - The training split obtained with the `openqa` parameter does not include the `answer_start` field, as it is not required for the training phase of the OpenQA formulation.
  - The split obtained with the `excluded` parameter is also missing the `answer_start` field, as we could not identify the starting index of the answers for these examples in the context after translation.
  
- ### Data Fields
  
- The data fields with `*` prefix are the same for all splits. The splits we get by `openqa` and `excluded` parameters are missing `answer_start` field.
  
- - `*id`: a string feature.
- - `*title`: a string feature.
- - `*context`: a string feature.
- - `*question`: a string feature.
- - `*answers`: a dictionary feature containing:
- - `*text`: a string feature.
- - `answer_start`: a int32 feature.
  
  ### Data Splits
  
@@ -170,43 +211,49 @@ The SQuAD2.0 TR dataset has 2 splits: _train_ and _validation_. Below are the st
  | train | 442 | 18776 | 61293 | 43498 | 104,791 |
  | validation | 35 | 1204 | 2346 | 5945 | 8291 |
  
- In addition to the default configuration, we also include a different view of train split specifically for openqa setting. In this setting, we only provide question-answer pairs along with their contexts.
  
  | Split | Articles | Paragraphs | Questions w/ answers | Total |
  | ---------- | -------- | ---------- | -------------------- | ------- |
  | openqa | 442 | 18776 | 86821 | 86821 |
  
  
- ## Dataset Creation
  
- ### Curation Rationale
  
- We translated the titles, context paragraphs, questions and answer spans from the original SQuAD2.0 dataset using "Amazon Translate" - requiring us to remap the starting positions of the answer spans, since their positions were changed due to the automatic translation.
- We performed an automatic post-processing step to populate the start positions for the answer spans.
- To do so, we have first looked at whether there was an exact match for the translated answer span in the translated context paragraph and if so, we kept the answer text along with this start position found.
- If no exact match was found, we looked for approximate matches using a character-level edit distance algorithm.
- We have excluded the question-answer pairs from the original dataset where neither an exact nor an approximate match was found in the translated version.
- Our "default" configuration corresponds to this version.
- We have put the "excluded" examples in our "excluded" configuration.
- As a result, the datasets in these two configurations are mutually exclusive.
  
- | Split | Articles | Paragraphs | Questions wo/ answers | Total |
- | ------- | -------- | ---------- | --------------------- | ------- |
- | train | ? | ? | 25528 | 25528 |
- | dev | ? | ? | ? | ? |
  
- More information on our translation strategy can be found in our linked paper.
  
- ### Source Data
  
- This dataset used the original SQuAD2.0 dataset as its source data.
  
- ### 🏷 Licensing Information
  
- The SQuAD-TR is released under [CC BY-SA 4.0](https://creativecommons.org/licenses/by-sa/4.0/) in accordance with the license of SQuAD-2.0.
  
  ### ✍️ Citation
  
  ```
@@ -214,4 +261,6 @@ The SQuAD-TR is released under [CC BY-SA 4.0](https://creativecommons.org/licens
  ```
  
  ## ❤ Acknowledgment
- This research was supported by the _[AWS Cloud Credits for Research Program](https://aws.amazon.com/government-education/research-and-technical-computing/cloud-credit-for-research/) (formerly AWS Research Grants)_.
@@ -128,38 +128,79 @@ SQuAD-TR is a machine translated version of the original [SQuAD2.0](https://rajp
  ### Data Instances
  
  Our data instances follow that of the original SQuAD2.0 dataset.
+ Shared below is an example instance from the default train dataset. 🍫
  
+ Example from SQuAD2.0:
  ```
  {
+ "context": "Chocolate is New York City's leading specialty-food export, with up to US$234 million worth of exports each year. Entrepreneurs were forming a \"Chocolate District\" in Brooklyn as of 2014, while Godiva, one of the world's largest chocolatiers, continues to be headquartered in Manhattan.",
+ "qas": [
+ {
+ "id": "56cff221234ae51400d9c140",
+ "question": "Which one of the world's largest chocolate makers is stationed in Manhattan?",
+ "is_impossible": false,
+ "answers": [
+ {
+ "text": "Godiva",
+ "answer_start": 194
+ }
+ ]
+ }
+ ]
+ }
+ ```
+ 
+ Turkish translation:
+ 
+ ```
+ {
+ "context": "Çikolata, her yıl 234 milyon ABD dolarına varan ihracatı ile New York'un önde gelen özel gıda ihracatıdır. Girişimciler 2014 yılı itibariyle Brooklyn'de bir “Çikolata Bölgesi” kurarken, dünyanın en büyük çikolatacılarından biri olan Godiva merkezi Manhattan'da olmaya devam ediyor.",
+ "qas": [
+ {
+ "id": "56cff221234ae51400d9c140",
+ "question": "Dünyanın en büyük çikolata üreticilerinden hangisi Manhattan'da konuşlandırılmış?",
+ "is_impossible": false,
+ "answers": [
+ {
+ "text": "Godiva",
+ "answer_start": 233
+ }
+ ]
+ }
  ]
  }
  
  ```
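For reference, `answer_start` is a character offset into `context`. The snippet below is a minimal illustrative sketch (not part of the dataset tooling) showing how the answer span in the examples above can be recovered from that offset.

```py
# Minimal sketch (not part of the dataset tooling): recover the answer span
# from a (context, answer_start, text) triple as in the examples above.
def answer_span(context: str, answer_start: int, text: str) -> str:
    """Slice the answer out of the context using its character offset."""
    return context[answer_start:answer_start + len(text)]

# For the English instance above, answer_span(context, 194, "Godiva") yields
# "Godiva"; the Turkish instance uses the remapped offset 233 instead.
```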
+ 
+ 
+ ### Data Fields
+ 
+ Below is the data model of the splits.
+ 
+ - `id`: a string feature.
+ - `title`: a string feature.
+ - `context`: a string feature.
+ - `question`: a string feature.
+ - `answers`: a dictionary feature containing:
+     - `text`: a string feature.
+     - `*answer_start`: an int32 feature.
+ 
+ *Notes:
  - The training split obtained with the `openqa` parameter does not include the `answer_start` field, as it is not required for the training phase of the OpenQA formulation.
  - The split obtained with the `excluded` parameter is also missing the `answer_start` field, as we could not identify the starting index of the answers for these examples in the context after translation.
  
+ ## Dataset Creation
  
+ We translated the titles, context paragraphs, questions and answer spans from the original SQuAD2.0 dataset using [Amazon Translate](https://aws.amazon.com/translate/). This required us to remap the starting positions of the answer spans, since their positions changed as a result of the automatic translation.
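For illustration only, the snippet below sketches how a single field could be sent to Amazon Translate via `boto3`; the client setup and region here are assumptions, and this is not the authors' actual translation pipeline.

```py
# Minimal sketch, assuming AWS credentials are configured; the region is an
# assumption. Illustrates calling Amazon Translate via boto3, not the exact
# pipeline used to build SQuAD-TR.
import boto3

translate = boto3.client("translate", region_name="us-east-1")

def translate_en_to_tr(text: str) -> str:
    """Translate a single English string to Turkish with Amazon Translate."""
    response = translate.translate_text(
        Text=text,
        SourceLanguageCode="en",
        TargetLanguageCode="tr",
    )
    return response["TranslatedText"]
```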
  
+ We then performed an automatic post-processing step to populate the start positions of the answer spans. To do so, we first checked whether the translated answer span had an exact match in the translated context paragraph; if so, we kept the answer text along with the start position of that match.
+ If no exact match was found, we looked for approximate matches using a character-level edit distance algorithm (sketched below).
+ 
+ We excluded the question-answer pairs for which neither an exact nor an approximate match could be found in the translated version. Our `default` configuration corresponds to this version.
+ 
+ We have put the excluded examples in our `excluded` configuration.
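The snippet below is a minimal sketch of this matching procedure; the `python-Levenshtein` dependency, the fixed-length window search, and the distance threshold are illustrative assumptions rather than the exact implementation used to build the dataset.

```py
# Minimal sketch of the answer-span remapping described above; the library
# (python-Levenshtein), window search, and threshold are assumptions, not the
# authors' exact implementation.
from typing import Optional, Tuple
import Levenshtein  # pip install python-Levenshtein

def remap_answer(context_tr: str, answer_tr: str,
                 max_distance: int = 3) -> Optional[Tuple[str, int]]:
    # 1) Exact match: keep the translated answer and its start position.
    start = context_tr.find(answer_tr)
    if start != -1:
        return answer_tr, start

    # 2) Approximate match: slide a window of the same length over the context
    #    and keep the candidate with the smallest character-level edit distance.
    window = len(answer_tr)
    best_start, best_dist = -1, max_distance + 1
    for i in range(len(context_tr) - window + 1):
        dist = Levenshtein.distance(context_tr[i:i + window], answer_tr)
        if dist < best_dist:
            best_start, best_dist = i, dist
    if best_start != -1:
        return context_tr[best_start:best_start + window], best_start

    # 3) No usable match: the pair goes to the `excluded` configuration.
    return None
```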
+ 
+ As a result, the datasets in these two configurations are mutually exclusive. Below are the details of the corresponding dataset splits.
  
  ### Data Splits
  
@@ -170,43 +211,49 @@ The SQuAD2.0 TR dataset has 2 splits: _train_ and _validation_. Below are the st
  | train | 442 | 18776 | 61293 | 43498 | 104,791 |
  | validation | 35 | 1204 | 2346 | 5945 | 8291 |
  
+ 
+ | Split | Articles | Paragraphs | Questions w/o answers | Total |
+ | ------- | -------- | ---------- | --------------------- | ------- |
+ | train-excluded | 440 | 13490 | 25528 | 25528 |
+ | dev-excluded | 35 | 924 | 3582 | 3582 |
+ 
+ 
+ In addition to the default configuration, a different view of the train split can be obtained specifically for the OpenQA setting by combining the `train` and `train-excluded` splits. In this view, we only provide question-answer pairs (without the `answer_start` field) along with their contexts; a sketch of this combination follows the table below.
  
  | Split | Articles | Paragraphs | Questions w/ answers | Total |
  | ---------- | -------- | ---------- | -------------------- | ------- |
  | openqa | 442 | 18776 | 86821 | 86821 |
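For illustration, the sketch below shows how such a combined OpenQA-style view could be assembled with the 🤗 `datasets` API. The dataset ID mirrors the `[TBD]` placeholder used later in this card, and the released `openqa` configuration already provides this view directly.

```py
# Minimal sketch (placeholders, not the released tooling): assemble an
# OpenQA-style view by combining the answerable `train` split with the
# `train-excluded` examples and keeping only context/question/answer text.
from datasets import load_dataset, concatenate_datasets

def to_openqa(example):
    # Keep only what the OpenQA formulation needs; `answer_start` (if present)
    # is dropped here.
    return {
        "context": example["context"],
        "question": example["question"],
        "answer_texts": example["answers"]["text"],
    }

# "[TBD]" mirrors the placeholder dataset ID used later in this card.
default_train = load_dataset("[TBD]", "default", split="train")
excluded_train = load_dataset("[TBD]", "excluded", split="train")

openqa_view = concatenate_datasets([
    default_train.map(to_openqa, remove_columns=default_train.column_names),
    excluded_train.map(to_openqa, remove_columns=excluded_train.column_names),
])
```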
  
+ More information on our translation strategy can be found in our linked paper.
  
+ ### Source Data
  
+ This dataset used the original SQuAD2.0 dataset as its source data.
  
+ ### Licensing Information
  
+ SQuAD-TR is released under [CC BY-SA 4.0](https://creativecommons.org/licenses/by-sa/4.0/) in accordance with the license of SQuAD2.0.
  
+ #### 🤗 HuggingFace datasets
+ ```py
+ from datasets import load_dataset
+ 
+ squad_tr_standard_qa = load_dataset("[TBD]", "default")
+ squad_tr_open_qa = load_dataset("[TBD]", "openqa")
+ squad_tr_excluded = load_dataset("[TBD]", "excluded")
+ xquad_tr = load_dataset("xquad", "xquad.tr")  # External resource
+ ```
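A short usage sketch for inspecting what was loaded (assuming the `[TBD]` placeholders above resolve to the published dataset):

```py
# Usage sketch: inspect the default configuration once it is loaded.
print(squad_tr_standard_qa)                      # available splits and sizes
example = squad_tr_standard_qa["train"][0]       # first training instance
print(example["question"], example["answers"])   # flattened SQuAD-style fields
```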
+ * Demo application 👉 [Link TBD].
  
+ ### 🔬 Reproducibility
  
+ You can find all code, models, and samples of the input data here: [link TBD]. Please feel free to reach out to us if you have any specific questions.
  
  ### ✍️ Citation
  
  ```
@@ -214,4 +261,6 @@ The SQuAD-TR is released under [CC BY-SA 4.0](https://creativecommons.org/licens
  ```
  
  ## ❤ Acknowledgment
+ This research was supported by the _[AWS Cloud Credits for Research Program](https://aws.amazon.com/government-education/research-and-technical-computing/cloud-credit-for-research/) (formerly AWS Research Grants)_.
+ 
+ We thank Alara Dirik, Almira Bağlar, Berfu Büyüköz, Berna Erden, Gökçe Uludoğan, Havva Yüksel, Melih Barsbey, Murat Karademir, Selen Parlar, Tuğçe Ulutuğ, and Utku Yavuz for their support on our application to the AWS Cloud Credits for Research Program, and Fatih Mehmet Güler for the valuable advice, discussion, and insightful comments.