---
language:
- nb
license: cc0-1.0
task_categories:
- question-answering
size_categories:
- 1K<n<10K
dataset_info:
  features:
  - name: id
    dtype: string
  - name: context
    dtype: string
  - name: question
    dtype: string
  - name: answers
    sequence:
    - name: text
      dtype: string
    - name: answer_start
      dtype: int32
  splits:
  - name: train
    num_bytes: 8739891
    num_examples: 3808
  - name: validation
    num_bytes: 1081237
    num_examples: 472
  - name: test
    num_bytes: 1096650
    num_examples: 472
  download_size: 4188322
  dataset_size: 10917778
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: validation
    path: data/validation-*
  - split: test
    path: data/test-*
---
# Dataset Card for NorQuAD

<!-- Provide a quick summary of the dataset. -->

NorQuAD is the first Norwegian question answering dataset for machine reading comprehension, created from scratch in Norwegian. The dataset consists of 4,752 manually created question-answer pairs.
## Dataset Details

### Dataset Description

<!-- Provide a longer summary of what this dataset is. -->

The dataset provides Norwegian question-answer pairs drawn from two sources: Wikipedia articles and news texts.

- **Curated by:** Human annotators
- **Funded by:** The UiO Teksthub initiative
- **Shared by:** The [Language Technology Group](https://www.mn.uio.no/ifi/english/research/groups/ltg/), University of Oslo
- **Language(s) (NLP):** Norwegian Bokmål
- **License:** CC0-1.0
### Dataset Sources

<!-- Provide the basic links for the dataset. -->

- **Repository:** [https://github.com/ltgoslo/NorQuAD](https://github.com/ltgoslo/NorQuAD)
- **Paper:** [Ivanova et al., 2023](https://aclanthology.org/2023.nodalida-1.17.pdf)
## Uses

<!-- Address questions around how the dataset is intended to be used. -->

The dataset is intended for the development and benchmarking of NLP models for extractive question answering in Norwegian.
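As a minimal loading sketch using the Hugging Face `datasets` library (the Hub ID `ltg/norquad` below is an assumption; substitute the dataset's actual repository ID):

```python
from datasets import load_dataset

# Hub ID is an assumption for illustration; replace it with the actual
# repository ID of this dataset on the Hugging Face Hub.
norquad = load_dataset("ltg/norquad")

print(norquad)  # DatasetDict with train (3,808), validation (472), test (472)

example = norquad["train"][0]
print(example["question"])
print(example["context"][:200])
print(example["answers"])  # {'text': [...], 'answer_start': [...]}
```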
## Dataset Structure

<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->

Each example follows the SQuAD-style format: an `id`, a `context` passage, a `question`, and an `answers` field containing the answer `text` together with its character-level `answer_start` offset into the context. The 4,752 question-answer pairs are split into train (3,808), validation (472), and test (472) sets.
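Since `answer_start` is a character offset into `context`, each answer span can be recovered and sanity-checked directly. A minimal sketch, reusing the `norquad` object from the loading example above and assuming offsets are exact:

```python
# Check that every gold answer string matches the context span its
# character offset points to (should hold for SQuAD-style data).
for example in norquad["validation"]:
    answers = example["answers"]
    for text, start in zip(answers["text"], answers["answer_start"]):
        span = example["context"][start:start + len(text)]
        assert span == text, f"Offset mismatch in example {example['id']}"
```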
## Dataset Creation

### Curation Rationale

<!-- Motivation for the creation of this dataset. -->

Machine reading comprehension is one of the key problems in natural language understanding. The question answering (QA) task requires a machine to read and comprehend a given text passage and then answer questions about it. While reading comprehension and question answering have seen substantial progress for English and a few other languages, Norwegian has lacked annotated data for the task. This project fills that gap by compiling human-created training, validation, and test sets for Norwegian.
### Source Data

<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->

**Wikipedia**: 872 articles were sampled from Norwegian Bokmål Wikipedia.

**News**: For the news category, articles were sampled from the Norsk Aviskorpus, an openly available dataset of Norwegian news.
#### Data Collection and Processing

<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->

**Wikipedia**: In order to include high-quality articles, 130 articles from the "Recommended" section and 139 from the "Featured" section were sampled. The remaining 603 articles were randomly sampled from the rest of the Wikipedia corpus. From the sampled articles, only the "Introduction" sections were selected as passages for annotation.

**News**: 1,000 articles were sampled from the Norsk Aviskorpus (NAK), a collection of Norwegian news texts for the year 2019. As with the Wikipedia articles, only news articles consisting of at least 300 words were chosen.
#### Who are the source data producers?

<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->

The data is sourced from Norwegian Wikipedia dumps and the openly available [Norwegian News Corpus](https://www.nb.no/sprakbanken/ressurskatalog/oai-nb-no-sbr-4/) from the Språkbanken repository.
### Annotations

<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->

The annotators processed 353 passages from Wikipedia and 403 passages from news, creating a total of 4,752 question-answer pairs.
#### Annotation process

<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->

The dataset was created in three stages: (i) selecting text passages, (ii) collecting question-answer pairs for those passages, and (iii) human validation of a subset of the created question-answer pairs.
#### Text selection

Data was selected from openly available Wikipedia and news sources, as described above.
#### Question-Answer Pairs

The annotators were provided with a set of initial instructions, largely based on those for similar datasets, in particular the English SQuAD dataset (Rajpurkar et al., 2016) and the GermanQuAD dataset (Möller et al., 2021). These instructions were subsequently refined following regular meetings with the annotation team.

The annotation guidelines provided to the annotators are available [here](https://github.com/ltgoslo/NorQuAD/blob/main/guidelines.md).

For annotation, we used the Haystack annotation tool, which was designed for QA collection.
#### Human validation

In a separate stage, the annotators validated a subset of the NorQuAD dataset. In this phase, each annotator answered the questions created by the other annotator. The question-answer pairs for validation were chosen at random. In total, 1,378 questions from the set of question-answer pairs were answered by validators.
#### Who are the annotators?

<!-- This section describes the people or systems who created the annotations. -->

Two students of the Master's program in Natural Language Processing at the University of Oslo, both native Norwegian speakers, created question-answer pairs from the collected passages. Each student received a separate set of passages for annotation. The students received financial remuneration for their efforts and are co-authors of the paper describing the resource.
## Citation

<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

    @inproceedings{ivanova2023norquad,
      title={NorQu{AD}: Norwegian Question Answering Dataset},
      author={Sardana Ivanova and Fredrik Aas Andreassen and Matias Jentoft and Sondre Wold and Lilja {\O}vrelid},
      booktitle={The 24th Nordic Conference on Computational Linguistics},
      year={2023},
      url={https://aclanthology.org/2023.nodalida-1.17.pdf}
    }

**APA:**

Ivanova, S., Andreassen, F. A., Jentoft, M., Wold, S., & Øvrelid, L. (2023). NorQuAD: Norwegian Question Answering Dataset. In *The 24th Nordic Conference on Computational Linguistics*. https://aclanthology.org/2023.nodalida-1.17.pdf
## Dataset Card Authors

Vladislav Mikhailov and Lilja Øvrelid

## Dataset Card Contact
vladism@ifi.uio.no and liljao@ifi.uio.no | |