Tasks: Text Generation
Modalities: Text
Sub-tasks: language-modeling
Languages: Japanese
Size: 10K - 100K
ArXiv: 2210.03992
Tags: question-generation
License: cc-by-sa-4.0

Commit: update

Files changed:
- .gitattributes +3 -0
- README.md +83 -0
- data/processed/test.jsonl +3 -0
- data/processed/train.jsonl +3 -0
- data/processed/validation.jsonl +3 -0
- process.py +38 -0
- qag_jaquad.py +0 -0
.gitattributes
CHANGED
@@ -52,3 +52,6 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.jpg filter=lfs diff=lfs merge=lfs -text
*.jpeg filter=lfs diff=lfs merge=lfs -text
*.webp filter=lfs diff=lfs merge=lfs -text
+data/processed/test.jsonl filter=lfs diff=lfs merge=lfs -text
+data/processed/train.jsonl filter=lfs diff=lfs merge=lfs -text
+data/processed/validation.jsonl filter=lfs diff=lfs merge=lfs -text
README.md
ADDED
@@ -0,0 +1,83 @@
---
license: cc-by-sa-4.0
pretty_name: SQuAD for question generation
language: en
multilinguality: monolingual
size_categories: 10K<n<100K
source_datasets: squad
task_categories:
- text-generation
task_ids:
- language-modeling
tags:
- question-generation
---

# Dataset Card for "lmqg/qag_squad"

## Dataset Description
- **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
- **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)
- **Point of Contact:** [Asahi Ushio](http://asahiushio.com/)

### Dataset Summary
This is a question & answer generation dataset based on SQuAD.

### Supported Tasks and Leaderboards
* `question-answer-generation`: The dataset can be used to train a model for question & answer generation.
  Success on this task is typically measured by achieving high BLEU-4, METEOR, ROUGE-L, BERTScore, and MoverScore (see our paper for more detail).
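For illustration only (the paper reports scores from its own evaluation pipeline), a minimal sketch of scoring generated question-answer strings with BLEU-4 and ROUGE-L via the Hugging Face `evaluate` library:

```python
# Rough sketch, not the paper's evaluation pipeline: score a generated
# "question: ..., answer: ..." string against a reference with `evaluate`.
import evaluate

predictions = ["question: Which single was released as the album's lead single?, answer: 4 Minutes"]
references = ["question: Which single was released as the album's lead single?, answer: 4 Minutes"]

bleu = evaluate.load("bleu")   # default max_order=4, i.e. BLEU-4
rouge = evaluate.load("rouge")

print(bleu.compute(predictions=predictions, references=[[r] for r in references])["bleu"])
print(rouge.compute(predictions=predictions, references=references)["rougeL"])
```
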
### Languages
English (en)

## Dataset Structure
An example of 'train' looks as follows.
```
{
    "paragraph": "\"4 Minutes\" was released as the album's lead single and peaked at number three on the Billboard Hot 100. It was Madonna's 37th top-ten hit on the chart—it pushed Madonna past Elvis Presley as the artist with the most top-ten hits. In the UK she retained her record for the most number-one singles for a female artist; \"4 Minutes\" becoming her thirteenth. At the 23rd Japan Gold Disc Awards, Madonna received her fifth Artist of the Year trophy from Recording Industry Association of Japan, the most for any artist. To further promote the album, Madonna embarked on the Sticky & Sweet Tour; her first major venture with Live Nation. With a gross of $280 million, it became the highest-grossing tour by a solo artist then, surpassing the previous record Madonna set with the Confessions Tour; it was later surpassed by Roger Waters' The Wall Live. It was extended to the next year, adding new European dates, and after it ended, the total gross was $408 million.",
    "questions": [
        "Which single was released as the album's lead single?",
        "Madonna surpassed which artist with the most top-ten hits?",
        "4 minutes became Madonna's which number one single in the UK?",
        "What is the name of the first tour with Live Nation?",
        "How much did Stick and Sweet Tour grossed?"
    ],
    "answers": [
        "4 Minutes",
        "Elvis Presley",
        "thirteenth",
        "Sticky & Sweet Tour",
        "$280 million,"
    ],
    "questions_answers": "question: Which single was released as the album's lead single?, answer: 4 Minutes | question: Madonna surpassed which artist with the most top-ten hits?, answer: Elvis Presley | question: 4 minutes became Madonna's which number one single in the UK?, answer: thirteenth | question: What is the name of the first tour with Live Nation?, answer: Sticky & Sweet Tour | question: How much did Stick and Sweet Tour grossed?, answer: $280 million,"
}
```
The data fields are the same among all splits.
- `questions`: a `list` of `string` features.
- `answers`: a `list` of `string` features.
- `paragraph`: a `string` feature.
- `questions_answers`: a `string` feature.
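As a minimal usage sketch (the repository id below is taken from the card title and may need adjusting to the actual Hub id), the splits and fields can be inspected with `datasets`:

```python
from datasets import load_dataset

# Repository id assumed from the card title; adjust if the dataset lives under a different id.
dataset = load_dataset("lmqg/qag_squad")

example = dataset["train"][0]
print(example["paragraph"][:100])    # context paragraph
print(example["questions"])          # list of questions for this paragraph
print(example["answers"])            # answers aligned with `questions`
print(example["questions_answers"])  # flattened "question: ..., answer: ..." string
```
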
## Data Splits

|train|validation|test|
|----:|---------:|---:|
|16462|      2067|2429|

## Citation Information

```
@inproceedings{ushio-etal-2022-generative,
    title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
    author = "Ushio, Asahi and
      Alva-Manchego, Fernando and
      Camacho-Collados, Jose",
    booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, U.A.E.",
    publisher = "Association for Computational Linguistics",
}
```
data/processed/test.jsonl
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:b6ae25375dbb410b6cd0d6b774ef54a6f0660323104ef0338c192e26d3af9b54
size 8803317
data/processed/train.jsonl
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:d151c9a8defecd2269f922f7e47a99e0e433f5f485fe719c8ac71a624c64d4ea
size 33606848
data/processed/validation.jsonl
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:66598b94026ceb0287d8c4d83f4835bb043eab9482cac57d937f99c74c0d4329
size 4814912
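The three `.jsonl` files above are Git LFS pointers; the actual JSON Lines data is fetched by LFS. A small sketch, assuming the LFS objects have been pulled locally (e.g. with `git lfs pull`), for reading one split directly:

```python
import json

# Assumes the LFS objects have been fetched, so the file contains JSON lines
# (one example per line, as written by process.py) rather than pointer text.
with open("data/processed/train.jsonl") as f:
    examples = [json.loads(line) for line in f]

print(len(examples))
print(examples[0]["paragraph"][:80])
```
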
process.py
ADDED
@@ -0,0 +1,38 @@
import json
import os

from tqdm import tqdm
from datasets import load_dataset

# Separator placed between flattened question/answer pairs in `questions_answers`.
SEP_TOKEN = " | "


def create_data(hf_data):
    """Group question generation examples by paragraph into QAG examples."""
    df = hf_data.to_pandas()
    output = []
    for paragraph, g in df.groupby("paragraph"):
        example = {
            'paragraph': paragraph.replace(SEP_TOKEN, " "),
            'questions': [_g.replace(SEP_TOKEN, " ") for _g in g['question']],
            'answers': [_g.replace(SEP_TOKEN, " ") for _g in g['answer']],
        }
        # Flatten the aligned question/answer lists into a single string.
        example["questions_answers"] = SEP_TOKEN.join(
            [f"question: {q}, answer: {a}" for q, a in zip(example["questions"], example["answers"])])
        output.append(example)
    return output


if __name__ == '__main__':
    qg_data = load_dataset("lmqg/qg_jaquad")
    data_valid = create_data(qg_data['validation'])
    data_train = create_data(qg_data['train'])
    data_test = create_data(qg_data['test'])
    data_all = {'train': data_train, 'validation': data_valid, 'test': data_test}
    output_dir = './data/processed'
    os.makedirs(output_dir, exist_ok=True)
    for k, _data in data_all.items():
        with open('{}/{}.jsonl'.format(output_dir, k), 'w') as f:
            for single_data in tqdm(_data):
                f.write(json.dumps(single_data) + '\n')
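The `questions_answers` field produced above joins each pair with `SEP_TOKEN` (`" | "`). A small sketch (hypothetical helper, not part of the repository) for splitting that string back into pairs:

```python
# Hypothetical helper (not part of process.py): parse a flattened
# "question: ..., answer: ..." string back into (question, answer) pairs,
# assuming the " | " separator used above.
def parse_questions_answers(text):
    pairs = []
    for chunk in text.split(" | "):
        question, _, answer = chunk.partition(", answer:")
        pairs.append((question.replace("question:", "", 1).strip(), answer.strip()))
    return pairs

print(parse_questions_answers(
    "question: What is the name of the first tour with Live Nation?, answer: Sticky & Sweet Tour"
))
```
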
qag_jaquad.py
ADDED
File without changes