saattrupdan committed
Commit 2a95948
1 Parent(s): 0c3c2dd

docs: Update readme

Files changed (1)
  1. README.md +47 -81
README.md CHANGED
@@ -1,100 +1,69 @@
 ---
- pretty_name: ScandiQA
 language:
 - da
 - sv
 - no
 license:
 - cc-by-sa-4.0
 multilinguality:
 - multilingual
 size_categories:
- - 1K<n<10K
 source_datasets:
- - mkqa
- - natural_questions
 task_categories:
- - question-answering
 task_ids:
- - extractive-qa
 ---

- # Dataset Card for ScandiQA

 ## Dataset Description

- - **Repository:** <https://github.com/alexandrainst/scandi-qa>
 - **Point of Contact:** [Dan Saattrup Nielsen](mailto:dan.nielsen@alexandra.dk)
- - **Size of downloaded dataset files:** 69 MB
- - **Size of the generated dataset:** 67 MB
- - **Total amount of disk used:** 136 MB

 ### Dataset Summary

- ScandiQA is a dataset of questions and answers in the Danish, Norwegian, and Swedish
- languages. All samples come from the Natural Questions (NQ) dataset, which is a large
- question answering dataset from Google searches. The Scandinavian questions and answers
- come from the MKQA dataset, where 10,000 NQ samples were manually translated into,
- among others, Danish, Norwegian, and Swedish. However, this did not include a
- translated context, hindering the training of extractive question answering models.
-
- We merged the NQ dataset with the MKQA dataset and extracted contexts either as "long
- answers" from the NQ dataset, being the paragraph in which the answer was found, or
- otherwise by locating the paragraph with the largest cosine similarity to the question
- among the paragraphs that contain the desired answer.
-
- Further, many answers in the MKQA dataset were "language normalised": for instance, all
- date answers were converted to the format "YYYY-MM-DD", meaning that in most cases
- these answers do not appear in any paragraph. We solve this by extending the MKQA
- answers with plausible "answer candidates", being slight perturbations or translations
- of the answer.
-
- With the contexts extracted, we translated them into Danish, Swedish and Norwegian using
- the [DeepL translation service](https://www.deepl.com/pro-api?cta=header-pro-api) for
- Danish and Swedish, and the [Google Translation
- service](https://cloud.google.com/translate/docs/reference/rest/) for Norwegian. After
- translation we ensured that the Scandinavian answers do indeed occur in the translated
- contexts.
-
- As we are filtering the MKQA samples at both the "merging stage" and the "translation
- stage", we are not able to fully convert the 10,000 samples to the Scandinavian
- languages, and instead get roughly 8,000 samples per language. These have further been
- split into training, validation and test splits, with the latter two containing
- roughly 750 samples. The splits have been created in such a way that the proportion of
- samples without an answer is roughly the same in each split.
-

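For illustration, a minimal sketch of the fallback context-selection step described above, using TF-IDF vectors and cosine similarity. This is hypothetical: the actual pipeline's similarity model is not specified here, and `select_context` and its signature are illustrative only.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity


def select_context(question, paragraphs, answer):
    """Pick the paragraph most similar to the question among those containing the answer.

    Hypothetical helper illustrating the cosine-similarity fallback; not the project's
    actual implementation.
    """
    # Keep only paragraphs in which the desired answer actually occurs.
    candidates = [p for p in paragraphs if answer.lower() in p.lower()]
    if not candidates:
        return None
    # Embed the question and candidate paragraphs as TF-IDF vectors.
    vectors = TfidfVectorizer().fit_transform([question] + candidates)
    # Cosine similarity between the question vector and each candidate paragraph.
    scores = cosine_similarity(vectors[0], vectors[1:])[0]
    return candidates[scores.argmax()]
```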
 ### Supported Tasks and Leaderboards

- Training machine learning models for extractive question answering is the intended task
- for this dataset. No leaderboard is active at this point.


 ### Languages

- The dataset is available in Danish (`da`), Swedish (`sv`) and Norwegian (`no`).


 ## Dataset Structure

 ### Data Instances

- - **Size of downloaded dataset files:** 69 MB
- - **Size of the generated dataset:** 67 MB
- - **Total amount of disk used:** 136 MB

- An example from the `train` split of the `da` subset looks as follows.
 ```
 {
- 'example_id': 123,
- 'question': 'Er dette en test?',
- 'answer': 'Dette er en test',
- 'answer_start': 0,
- 'context': 'Dette er en testkontekst.',
- 'answer_en': 'This is a test',
- 'answer_start_en': 0,
- 'context_en': "This is a test",
- 'title_en': 'Train test'
 }
 ```
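As a usage sketch, a subset could be loaded with the `datasets` library roughly as follows. The Hub identifier `alexandrainst/scandiqa` is an assumption; substitute the dataset's actual repository ID.

```python
from datasets import load_dataset

# Hypothetical Hub ID -- replace with the dataset's actual repository name.
scandiqa_da = load_dataset("alexandrainst/scandiqa", "da")

# Inspect the first training sample.
print(scandiqa_da["train"][0])
```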
 
@@ -102,38 +71,34 @@ An example from the `train` split of the `da` subset looks as follows.

 The data fields are the same among all splits.

- `example_id`: an `int64` feature.
- `question`: a `string` feature.
- `answer`: a `string` feature.
- `answer_start`: an `int64` feature.
- `context`: a `string` feature.
- `answer_en`: a `string` feature.
- `answer_start_en`: an `int64` feature.
- `context_en`: a `string` feature.
- `title_en`: a `string` feature.

- ### Data Splits

- | name | train | validation | test |
- |----------|------:|-----------:|-----:|
- | da | 6311 | 749 | 750 |
- | sv | 6299 | 750 | 749 |
- | no | 6314 | 749 | 750 |


 ## Dataset Creation

 ### Curation Rationale

- The Scandinavian languages do not have any gold standard question answering dataset.
- This dataset is not quite gold standard either, but since both the questions and
- answers are manually translated, it is a solid silver standard dataset.

 ### Source Data

- The original data was collected from the [MKQA](https://github.com/apple/ml-mkqa/) and
- [Natural Questions](https://ai.google.com/research/NaturalQuestions) datasets from
- Apple and Google, respectively.

  ## Additional Information
@@ -146,4 +111,5 @@ Institute](https://alexandra.dk/) curated this dataset.
 ### Licensing Information

 The dataset is licensed under the [CC BY-SA 4.0
- license](https://creativecommons.org/licenses/by-sa/4.0/).

 ---
+ pretty_name: ScandiWiki
 language:
 - da
 - sv
 - no
+ - nb
+ - nn
+ - is
+ - fo
 license:
 - cc-by-sa-4.0
 multilinguality:
 - multilingual
 size_categories:
+ - 1M<n<10M
 source_datasets:
+ - wikipedia
 task_categories:
+ - fill-mask
+ - text-generation
+ - feature-extraction
 task_ids:
+ - language-modeling
 ---

+ # Dataset Card for ScandiWiki

 ## Dataset Description

 - **Point of Contact:** [Dan Saattrup Nielsen](mailto:dan.nielsen@alexandra.dk)
+ - **Size of downloaded dataset files:** 4200 MB
+ - **Size of the generated dataset:** 4200 MB
+ - **Total amount of disk used:** 8400 MB

 ### Dataset Summary

+ ScandiWiki is a parsed and deduplicated Wikipedia dump in Danish, Norwegian Bokmål,
+ Norwegian Nynorsk, Swedish, Icelandic and Faroese.

 
 ### Supported Tasks and Leaderboards

+ This dataset is intended for general language modelling.


 ### Languages

+ The dataset is available in Danish (`da`), Swedish (`sv`), Norwegian Bokmål (`nb`),
+ Norwegian Nynorsk (`nn`), Icelandic (`is`) and Faroese (`fo`).

 
 ## Dataset Structure

 ### Data Instances

+ - **Size of downloaded dataset files:** 4200 MB
+ - **Size of the generated dataset:** 4200 MB
+ - **Total amount of disk used:** 8400 MB

+ An example from the `train` split of the `fo` subset looks as follows.
 ```
 {
+ 'id': '3380',
+ 'url': 'https://fo.wikipedia.org/wiki/Enk%C3%B6pings%20kommuna',
+ 'title': 'Enköpings kommuna',
+ 'text': 'Enköpings kommuna (svenskt: Enköpings kommun), er ein kommuna í Uppsala län í Svøríki. Enköpings kommuna hevur umleið 40.656 íbúgvar (2013).\n\nKeldur \n\nKommunur í Svøríki'
 }
 ```

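A corresponding loading sketch with the `datasets` library. The Hub identifier `alexandrainst/scandiwiki` is an assumption; substitute the dataset's actual repository ID.

```python
from datasets import load_dataset

# Hypothetical Hub ID -- replace with the dataset's actual repository name.
scandiwiki_fo = load_dataset("alexandrainst/scandiwiki", "fo")

# Print the title of the first article in the training split.
print(scandiwiki_fo["train"][0]["title"])
```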
 The data fields are the same among all splits.

+ - `id`: a `string` feature.
+ - `url`: a `string` feature.
+ - `title`: a `string` feature.
+ - `text`: a `string` feature.

+ ### Data Subsets

+ | name | samples |
+ |----------|----------:|
+ | da | 287,216 |
+ | sv | 2,469,978 |
+ | nb | 596,593 |
+ | nn | 162,776 |
+ | is | 55,418 |
+ | fo | 12,582 |


 ## Dataset Creation

 ### Curation Rationale

+ Parsing and deduplicating the Wikipedia dump takes quite a long time, so this dataset
+ is primarily provided for convenience.
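As an illustration of the deduplication step mentioned above, a minimal exact-match sketch is shown below. This is a hypothetical approach; the card does not state which deduplication method was actually used.

```python
import hashlib


def deduplicate(articles):
    """Keep only the first occurrence of each whitespace-normalised article text.

    Hypothetical exact-match deduplication; the actual method used for ScandiWiki
    may differ.
    """
    seen, unique = set(), []
    for article in articles:
        # Normalise whitespace so trivially reformatted duplicates hash identically.
        normalised = " ".join(article["text"].split())
        digest = hashlib.sha256(normalised.encode("utf-8")).hexdigest()
        if digest not in seen:
            seen.add(digest)
            unique.append(article)
    return unique
```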
 

 ### Source Data

+ The original data is from the [wikipedia
+ dataset](https://huggingface.co/datasets/wikipedia).


  ## Additional Information
 
 ### Licensing Information

 The dataset is licensed under the [CC BY-SA 4.0
+ license](https://creativecommons.org/licenses/by-sa/4.0/), in accordance with the
+ license of the [wikipedia dataset](https://huggingface.co/datasets/wikipedia).