---
dataset_info:
  features:
  - name: id
    dtype: string
  - name: context
    dtype: string
  - name: question
    dtype: string
  - name: answers
    sequence:
    - name: text
      dtype: string
    - name: answer_start
      dtype: int32
  splits:
  - name: train
    num_bytes: 8739891
    num_examples: 3808
  - name: validation
    num_bytes: 1081237
    num_examples: 472
  - name: test
    num_bytes: 1096650
    num_examples: 472
  download_size: 4188322
  dataset_size: 10917778
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: validation
    path: data/validation-*
  - split: test
    path: data/test-*
license: cc0-1.0
task_categories:
- question-answering
language:
- nb
size_categories:
- 1K<n<10K
---

# Dataset Card for NorQuAD

<!-- Provide a quick summary of the dataset. -->

NorQuAD is the first Norwegian question answering dataset for machine reading comprehension, created from scratch in Norwegian. The dataset consists of 4,752 manually created question-answer pairs.

## Dataset Details

### Dataset Description

<!-- Provide a longer summary of what this dataset is. -->
The dataset provides Norwegian question-answer pairs taken from two data sources: Wikipedia and news.


- **Curated by:** Human annotators.
- **Funded by:** The UiO Teksthub initiative
- **Shared by:** The [Language Technology Group](https://www.mn.uio.no/ifi/english/research/groups/ltg/), University of Oslo
- **Language(s) (NLP):** Norwegian Bokmål
- **License:** CC0-1.0

### Dataset Sources

<!-- Provide the basic links for the dataset. -->

- **Repository:** [https://github.com/ltgoslo/NorQuAD](https://github.com/ltgoslo/NorQuAD)
- **Paper:** [Ivanova et al., 2023](https://aclanthology.org/2023.nodalida-1.17.pdf)


## Uses

<!-- Address questions around how the dataset is intended to be used. -->

The dataset is intended for developing and benchmarking Norwegian question answering (machine reading comprehension) models.
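
The dataset can be consumed with the 🤗 Datasets library listed above. The sketch below assumes the dataset is published on the Hub as `ltg/norquad`; that repository id is an assumption based on the hosting namespace and may need adjusting.

```python
from datasets import load_dataset

# Minimal loading sketch; "ltg/norquad" is an assumed repository id.
dataset = load_dataset("ltg/norquad")

# The card lists three splits: train (3,808), validation (472), and test (472).
for split_name, split in dataset.items():
    print(split_name, split.num_rows)

# Each example exposes the fields id, context, question, and answers.
print(dataset["train"][0]["question"])
```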

## Dataset Structure

<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->

**Data Instances**

```
{
    "id": "1",
    "context": "This is a test context",
    "question": "This is a question",
    "answers": {
        "answer_start": [1],
        "text": ["This is an answer"]
    },
}
```
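
As in SQuAD-style data, `answer_start` is a character offset of the answer within `context`. The self-contained sketch below uses a hypothetical record shaped like the instance above (not real data) to show how the span can be recovered:

```python
# Hypothetical record with the same shape as a NorQuAD instance (not real data).
record = {
    "id": "1",
    "context": "This is a test context",
    "question": "This is a question",
    "answers": {
        "answer_start": [1],
        "text": ["This is an answer"],
    },
}

# answer_start is a character offset into context; in the actual dataset,
# slicing the context at that offset reproduces the annotated answer text.
# (The placeholder values above are illustrative and will not line up.)
for start, text in zip(record["answers"]["answer_start"], record["answers"]["text"]):
    span = record["context"][start:start + len(text)]
    print(f"offset {start}: extracted {span!r}, annotated {text!r}")
```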

**Data Fields**

```
  id: a string feature.
  context: a string feature.
  question: a string feature.
  answers: a dictionary feature containing:
    text: a string feature.
    answer_start: an int32 feature.
```

**Dataset Splits**

NorQuAD consists of a training set (3,808 examples), a validation set (472 examples), and a public test set (472 examples).

## Dataset Creation

### Curation Rationale

<!-- Motivation for the creation of this dataset. -->
Machine reading comprehension is one of the key problems in natural language understanding. The question answering (QA) task requires a machine to read and comprehend a given text passage and then answer questions about it. While there has been substantial progress in reading comprehension and question answering for English and a few other languages, annotated data for Norwegian has been lacking. This project aims to fill that gap by compiling human-created training, validation, and test sets for Norwegian.

### Source Data

<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->

**Wikipedia**: 872 articles were sampled from Norwegian Bokmål Wikipedia.

**News**: For the news category, articles were sampled from Norsk Aviskorpus, an openly available dataset of Norwegian news.

#### Data Collection and Processing

<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->

**Wikipedia**: In order to include high-quality articles, 130 articles were sampled from the "Recommended" section and 139 from the "Featured" section. The remaining 603 articles were sampled at random from the rest of the Wikipedia corpus. From each sampled article, only the "Introduction" section was selected as a passage for annotation.

**News**: 1,000 articles from the year 2019 were sampled from the Norsk Aviskorpus (NAK), a collection of Norwegian news texts. As was the case with the Wikipedia articles, only articles consisting of at least 300 words were chosen.

#### Who are the source data producers?

<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->

The data is sourced from Norwegian Wikipedia dumps and from the openly available [Norwegian News Corpus](https://www.nb.no/sprakbanken/ressurskatalog/oai-nb-no-sbr-4/) in the Språkbanken repository.

### Annotations

<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
In total, the annotators processed 353 passages from Wikipedia and 403 passages from news, creating 4,752 question-answer pairs.


#### Annotation process

<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->

The dataset was created in three stages: (i) selecting text passages, (ii) collecting question-answer pairs for those passages, and (iii) validating (a subset of) the created question-answer pairs.

#### Text selection

Data was selected from openly available sources from Wikipedia and News data, as described above.


#### Question-Answer Pairs

The annotators were provided with a set of initial instructions, largely based on those for similar datasets, in particular the English SQuAD dataset (Rajpurkar et al., 2016) and the GermanQuAD dataset (Möller et al., 2021). These instructions were subsequently refined following regular meetings with the annotation team. The annotation guidelines provided to the annotators are available [here](https://github.com/ltgoslo/NorQuAD/blob/main/guidelines.md). For annotation, we used the Haystack annotation tool, which was designed for QA collection.


#### Human validation

In a separate stage, the annotators validated a subset of the NorQuAD dataset. In this phase, each annotator answered the questions created by the other annotator. The question-answer pairs for validation were chosen at random. In total, 1,378 questions were answered by the validators.



#### Who are the annotators?

<!-- This section describes the people or systems who created the annotations. -->

Two students of the Master's program in Natural Language Processing at the University of Oslo, both native Norwegian speakers, created question-answer pairs from the collected passages. Each student received a separate set of passages for annotation. The students received financial remuneration for their efforts and are co-authors of the paper describing the resource.



## Citation

<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**
```
@inproceedings{ivanova2023norquad,
  title={NorQu{AD}: Norwegian Question Answering Dataset},
  author={Sardana Ivanova and Fredrik Aas Andreassen and Matias Jentoft and Sondre Wold and Lilja {\O}vrelid},
  booktitle={The 24th Nordic Conference on Computational Linguistics},
  year={2023},
  url={https://aclanthology.org/2023.nodalida-1.17.pdf}
}
```
**APA:**

Ivanova, S., Andreassen, F. A., Jentoft, M., Wold, S., & Øvrelid, L. (2023). NorQuAD: Norwegian Question Answering Dataset. In *Proceedings of the 24th Nordic Conference on Computational Linguistics (NoDaLiDa 2023)*.


## Dataset Card Authors

Vladislav Mikhailov and Lilja Øvrelid

## Dataset Card Contact

vladism@ifi.uio.no and liljao@ifi.uio.no