---
language:
- tr
license: cc-by-nc-nd-4.0
annotations_creators:
- machine-generated
language_creators:
- machine-generated
multilinguality:
- monolingual
pretty_name: SQuAD-TR
size_categories:
- 100K<n<1M
source_datasets:
- extended|squad
task_categories:
- question-answering
task_ids:
- open-domain-qa
- extractive-qa
paperswithcode_id: squad-tr
dataset_info:
- config_name: default
  features:
  - name: id
    dtype: string
  - name: title
    dtype: string
  - name: context
    dtype: string
  - name: question
    dtype: string
  - name: answers
    sequence:
    - name: text
      dtype: string
    - name: answer_start
      dtype: int32
  splits:
  - name: train
    num_bytes: 95795325
    num_examples: 104791
  - name: validation
    num_bytes: 8287109
    num_examples: 8291
  download_size: 9425486
  dataset_size: 104082434
- config_name: excluded
  features:
  - name: id
    dtype: string
  - name: title
    dtype: string
  - name: context
    dtype: string
  - name: question
    dtype: string
  - name: answers
    sequence:
    - name: text
      dtype: string
  splits:
  - name: train
    num_bytes: 24130226
    num_examples: 25528
  - name: validation
    num_bytes: 3427513
    num_examples: 3582
  download_size: 5270628
  dataset_size: 27557739
- config_name: openqa
  features:
  - name: id
    dtype: string
  - name: title
    dtype: string
  - name: context
    dtype: string
  - name: question
    dtype: string
  - name: answers
    sequence:
    - name: text
      dtype: string
  splits:
  - name: train
    num_bytes: 119261215
    num_examples: 130319
  - name: validation
    num_bytes: 11649046
    num_examples: 11873
  download_size: 14696114
  dataset_size: 130910261
---

# Dataset Card for SQuAD-TR

## Table of Contents
- [SQuAD-TR](#-squad-tr)
  - [Dataset Description](#dataset-description)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
- [Additional Information](#additional-information)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)

## 📜 SQuAD-TR

SQuAD-TR is a Turkish version of the original [SQuAD2.0](https://rajpurkar.github.io/SQuAD-explorer/) dataset, machine-translated using [Amazon Translate](https://aws.amazon.com/translate/).

### Dataset Description

- **Repository:** [SQuAD-TR GitHub Repository](https://github.com/boun-tabi/SQuAD2.0-TR)
- **Paper:** Building Efficient and Effective OpenQA Systems for Low-Resource Languages
- **Point of Contact:** [Emrah Budur](mailto:emrah.budur@boun.edu.tr)


## Dataset Structure

### Data Instances

Our data instances follow the format of the original SQuAD2.0 dataset.
Below is an example instance from the default train split 🍫

Example from SQuAD2.0:
```
{
    "context": "Chocolate is New York City's leading specialty-food export, with up to US$234 million worth of exports each year. Entrepreneurs were forming a \"Chocolate District\" in Brooklyn as of 2014, while Godiva, one of the world's largest chocolatiers, continues to be headquartered in Manhattan.",
    "qas": [
        {
            "id": "56cff221234ae51400d9c140",
            "question": "Which one of the world's largest chocolate makers is stationed in Manhattan?",
            "is_impossible": false,
            "answers": [
                {
                    "text": "Godiva",
                    "answer_start": 194
                }
            ]
        }
    ]
}
```

Turkish translation:

```
{
    "context": "Çikolata, her yıl 234 milyon ABD dolarına varan ihracatı ile New York'un önde gelen özel gıda ihracatıdır. Girişimciler 2014 yılı itibariyle Brooklyn'de bir “Çikolata Bölgesi” kurarken, dünyanın en büyük çikolatacılarından biri olan Godiva merkezi Manhattan'da olmaya devam ediyor.",
    "qas": [
        {
            "id": "56cff221234ae51400d9c140",
            "question": "Dünyanın en büyük çikolata üreticilerinden hangisi Manhattan'da konuşlandırılmış?",
            "is_impossible": false,
            "answers": [
                {
                    "text": "Godiva",
                    "answer_start": 233
                }
            ]
        }
    ]
}

```
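In both versions, `answer_start` is a character offset into `context`. As a quick illustrative check (a sketch for this card, not part of the dataset tooling), the offset for the translated example above can be recomputed directly:

```py
# Character-offset check for the translated example above: `answer_start`
# should equal the index of the answer text within the context.
context = (
    "Çikolata, her yıl 234 milyon ABD dolarına varan ihracatı ile New York'un "
    "önde gelen özel gıda ihracatıdır. Girişimciler 2014 yılı itibariyle "
    "Brooklyn'de bir “Çikolata Bölgesi” kurarken, dünyanın en büyük "
    "çikolatacılarından biri olan Godiva merkezi Manhattan'da olmaya devam ediyor."
)
answer = "Godiva"
print(context.find(answer))  # should print the stored answer_start (233 above)
```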


### Data Fields

Below is the data model of the splits.

- `id`: a string feature.
- `title`: a string feature.
- `context`: a string feature.
- `question`: a string feature.
- `answers`: a dictionary feature containing:
  - `text`: a string feature.
  - `answer_start`*: an int32 feature.

*Notes:
- The split obtained with the `openqa` configuration does not include the `answer_start` field, as it is not required for the training phase of the OpenQA formulation (see the snippet below).
- The split obtained with the `excluded` configuration is also missing the `answer_start` field, because we could not identify the starting indices of the answers in the context after translation.
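As a quick sanity check, the feature schemas of the configurations can be compared directly. This is a minimal sketch that assumes the dataset is published under the placeholder hub id `[TBD]` used elsewhere in this card:

```py
# Minimal sketch: compare feature schemas across configurations.
# "[TBD]" is the placeholder hub id used elsewhere in this card.
from datasets import load_dataset

default_val = load_dataset("[TBD]", "default", split="validation")
openqa_val = load_dataset("[TBD]", "openqa", split="validation")

print(default_val.features["answers"])  # sequence with `text` and `answer_start`
print(openqa_val.features["answers"])   # sequence with `text` only
```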

## Dataset Creation

We translated the titles, context paragraphs, questions, and answer spans of the original SQuAD2.0 dataset using [Amazon Translate](https://aws.amazon.com/translate/). Since automatic translation changes the character offsets of the answer spans, we then had to remap their starting positions.

We performed an automatic post-processing step to repopulate the start positions of the answer spans. We first checked whether the translated answer span occurred verbatim in the translated context paragraph; if so, we kept the answer text together with the start position of that exact match.
If no exact match was found, we searched for approximate matches using a character-level edit distance algorithm.
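
The sketch below illustrates this matching step. It is a simplified reconstruction, not our exact implementation: the `min_ratio` threshold is an assumed value, and `difflib`'s similarity ratio stands in for the character-level edit distance criterion described above.

```py
# Simplified sketch of the answer-span remapping step. The threshold and the
# use of difflib's similarity ratio (instead of a raw edit distance) are
# illustrative assumptions, not the exact implementation.
import difflib

def find_answer_start(context: str, answer: str, min_ratio: float = 0.8):
    # 1) Exact match: keep the translated answer with its first occurrence.
    start = context.find(answer)
    if start != -1:
        return answer, start
    # 2) Approximate match: slide a window the size of the answer over the
    #    context and keep the most similar span.
    window = len(answer)
    best_ratio, best_start = 0.0, -1
    for i in range(max(len(context) - window + 1, 1)):
        ratio = difflib.SequenceMatcher(None, context[i:i + window], answer).ratio()
        if ratio > best_ratio:
            best_ratio, best_start = ratio, i
    if best_ratio >= min_ratio:
        return context[best_start:best_start + window], best_start
    # 3) No acceptable match: the pair goes to the `excluded` configuration.
    return None, None
```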

We excluded the question-answer pairs for which neither an exact nor an approximate match could be found in the translated version; our `default` configuration corresponds to the remaining examples.

We have put the excluded examples in our `excluded` configuration. 

As a result, the datasets in these two configurations are mutually exclusive.  Below are the details for the corresponding dataset splits.

### Data Splits

SQuAD-TR has two splits: _train_ and _validation_. Below are the statistics for the most recent version of the dataset in the `default` configuration.

| Split      | Articles | Paragraphs | Answerable Questions | Unanswerable Questions | Total   |
| ---------- | -------- | ---------- | -------------------- | ---------------------- | ------- |
| train      | 442      | 18776      | 61293                | 43498                  | 104791  |
| validation | 35       | 1204       | 2346                 | 5945                   | 8291    |



| Split          | Articles | Paragraphs | Questions w/o answers | Total |
| -------------- | -------- | ---------- | --------------------- | ----- |
| train-excluded | 440      | 13490      | 25528                 | 25528 |
| dev-excluded   | 35       | 924        | 3582                  | 3582  |


In addition to the default configuration, a different view of the train split can be obtained for the OpenQA setting by combining the `train` and `train-excluded` splits. In this view, we only have question-answer pairs (without the `answer_start` field) along with their contexts. A sketch of how this view can be reconstructed follows the table below.

| Split  | Articles | Paragraphs | Questions w/ answers | Total |
| ------ | -------- | ---------- | -------------------- | ----- |
| openqa | 442      | 18776      | 86821                | 86821 |
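
The following sketch shows one way such a combined view could be reproduced with 🤗 Datasets, again assuming the placeholder hub id `[TBD]`; `flatten()` is used to align the two schemas before concatenation.

```py
# Sketch: build an openqa-style train view by combining the default and
# excluded train splits. "[TBD]" is the placeholder hub id from this card.
from datasets import load_dataset, concatenate_datasets

default_train = load_dataset("[TBD]", "default", split="train")
excluded_train = load_dataset("[TBD]", "excluded", split="train")

# Flatten `answers` into `answers.text` / `answers.answer_start` columns,
# drop the offsets, and concatenate the two now-identical schemas.
openqa_train = concatenate_datasets([
    default_train.flatten().remove_columns(["answers.answer_start"]),
    excluded_train.flatten(),
])
```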

More information on our translation strategy can be found in our linked paper.


### Source Data

This dataset used the original SQuAD2.0 dataset as its source data.

### Licensing Information

SQuAD-TR is released under [CC BY-NC-ND 4.0](https://creativecommons.org/licenses/by-nc-nd/4.0).

#### 🤗 HuggingFace datasets
```py
from datasets import load_dataset

# "[TBD]" is a placeholder for the dataset's hub id, to be announced.
squad_tr_standard_qa = load_dataset("[TBD]", "default")   # with answer_start
squad_tr_open_qa = load_dataset("[TBD]", "openqa")        # without answer_start
squad_tr_excluded = load_dataset("[TBD]", "excluded")     # pairs with no matched answer span
xquad_tr = load_dataset("xquad", "xquad.tr")              # external resource

```
* Demo application 👉 [Google Colab](https://colab.research.google.com/drive/1QVD0c1kFfOUc1sRGKDHWeF_HgNEineRt?usp=sharing). 

### 🔬 Reproducibility 

You can find all of the code, models, and input data samples here: [link TBD]. Please feel free to reach out to us if you have any questions.


  
### ✍️ Citation

>[Emrah Budur](https://scholar.google.com/citations?user=zSNd03UAAAAJ), [Rıza Özçelik](https://www.cmpe.boun.edu.tr/~riza.ozcelik), [Dilara Soylu](https://scholar.google.com/citations?user=_NC2jJEAAAAJ), [Omar Khattab](https://omarkhattab.com), [Tunga Güngör](https://www.cmpe.boun.edu.tr/~gungort/)  and [Christopher Potts](https://web.stanford.edu/~cgpotts).  
Building Efficient and Effective OpenQA Systems for Low-Resource Languages. 2024.

```
@misc{budur-etal-2024-squad-tr,
      title={Building Efficient and Effective OpenQA Systems for Low-Resource Languages}, 
      author={Emrah Budur and R{\i}za \"{O}z\c{c}elik and Dilara Soylu and Omar Khattab and Tunga G\"{u}ng\"{o}r and Christopher Potts},
      year={2024},
      eprint={TBD},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```
 
## ❤ Acknowledgment
This research was supported by the _[AWS Cloud Credits for Research Program](https://aws.amazon.com/government-education/research-and-technical-computing/cloud-credit-for-research/) (formerly AWS Research Grants)_.

We thank Alara Dirik, Almira Bağlar, Berfu Büyüköz, Berna Erden, Gökçe Uludoğan, Havva Yüksel, Melih Barsbey, Murat Karademir, Selen Parlar, Tuğçe Ulutuğ, and Utku Yavuz for their support on our application for the AWS Cloud Credits for Research Program, and Fatih Mehmet Güler for his valuable advice, discussions, and insightful comments.