Tasks: Text Generation
Modalities: Text
Sub-tasks: language-modeling
Languages: English
Size: 10K - 100K
ArXiv:
Tags: question-generation
License:

Commit: init
- README.md +9 -9
- data/processed/dev00.jsonl +0 -0
- data/processed/dev01.jsonl +0 -0
- data/processed/dev02.jsonl +0 -0
- data/processed/dev03.jsonl +0 -0
- data/processed/test00.jsonl +0 -0
- data/processed/test01.jsonl +0 -0
- data/processed/test02.jsonl +0 -0
- data/processed/test03.jsonl +0 -0
- data/processed/train00.jsonl +0 -0
- data/processed/train01.jsonl +0 -0
- data/processed/train02.jsonl +0 -0
- data/processed/train03.jsonl +0 -0
- data/processed/train04.jsonl +0 -0
- data/processed/train05.jsonl +0 -0
- data/processed/train06.jsonl +0 -0
- data/processed/train07.jsonl +0 -0
- data/processed/train08.jsonl +0 -0
- data/processed/train09.jsonl +0 -0
- data/processed/train10.jsonl +0 -0
- data/processed/train11.jsonl +0 -0
- data/processed/train12.jsonl +0 -0
- data/processed/train13.jsonl +0 -0
- data/processed/train14.jsonl +0 -0
- data/processed/train15.jsonl +0 -0
- data/processed/train16.jsonl +0 -0
- data/processed/train17.jsonl +0 -0
- data/processed/train18.jsonl +0 -0
- data/processed/train19.jsonl +0 -0
- data/processed/train20.jsonl +0 -0
- data/processed/train21.jsonl +0 -0
- data/processed/train22.jsonl +0 -0
- process.py +14 -16
- qg_squad.py +3 -6
- reference_files/{ans-test.txt → answer-test-truecase.txt} +0 -0
- reference_files/{ans-test-normalized.txt → answer-test.txt} +0 -0
- reference_files/{ans-dev.txt → answer-validation-truecase.txt} +0 -0
- reference_files/{ans-dev-normalized.txt → answer-validation.txt} +0 -0
- reference_files/{para-test.txt → paragraph-test-truecase.txt} +0 -0
- reference_files/{para-test-normalized.txt → paragraph-test.txt} +0 -0
- reference_files/{para-dev.txt → paragraph-validation-truecase.txt} +0 -0
- reference_files/{para-dev-normalized.txt → paragraph-validation.txt} +0 -0
- reference_files/{tgt-test-normalized.txt → question-test-normalized.txt} +0 -0
- reference_files/{tgt-test.txt → question-test-truecase.txt} +0 -0
- reference_files/{tgt-dev.txt → question-validation-truecase.txt} +0 -0
- reference_files/{tgt-dev-normalized.txt → question-validation.txt} +0 -0
- reference_files/{src-test.txt → sentence-test-truecase.txt} +0 -0
- reference_files/{src-test-normalized.txt → sentence-test.txt} +0 -0
- reference_files/{src-dev.txt → sentence-validation-truecase.txt} +0 -0
- reference_files/{src-dev-normalized.txt → sentence-validation.txt} +0 -0
README.md
CHANGED
@@ -64,11 +64,11 @@ An example of 'train' looks as follows.
 ```
 {
     "question": "What is heresy mainly at odds with?",
-    "…
+    "paragraph": "Heresy is any provocative belief or theory that is strongly at variance with established beliefs or customs. A heretic is a proponent of such claims or beliefs. Heresy is distinct from both apostasy, which is the explicit renunciation of one's religion, principles or cause, and blasphemy, which is an impious utterance or action concerning God or sacred things.",
     "answer": "established beliefs or customs",
     "sentence": "Heresy is any provocative belief or theory that is strongly at variance with established beliefs or customs .",
-    "…
-    "…
+    "paragraph_sentence": "<hl> Heresy is any provocative belief or theory that is strongly at variance with established beliefs or customs . <hl> A heretic is a proponent of such claims or beliefs. Heresy is distinct from both apostasy, which is the explicit renunciation of one's religion, principles or cause, and blasphemy, which is an impious utterance or action concerning God or sacred things.",
+    "paragraph_answer": "Heresy is any provocative belief or theory that is strongly at variance with <hl> established beliefs or customs <hl>. A heretic is a proponent of such claims or beliefs. Heresy is distinct from both apostasy, which is the explicit renunciation of one's religion, principles or cause, and blasphemy, which is an impious utterance or action concerning God or sacred things.",
     "sentence_answer": "Heresy is any provocative belief or theory that is strongly at variance with <hl> established beliefs or customs <hl> ."
 }
 ```
@@ -76,16 +76,16 @@ An example of 'train' looks as follows.
 The data fields are the same among all splits.
 #### plain_text
 - `question`: a `string` feature.
-- `…
+- `paragraph`: a `string` feature.
 - `answer`: a `string` feature.
 - `sentence`: a `string` feature.
-- `…
-- `…
+- `paragraph_answer`: a `string` feature, which is the same as the paragraph but with the answer highlighted by a special token `<hl>`.
+- `paragraph_sentence`: a `string` feature, which is the same as the paragraph but with the sentence containing the answer highlighted by a special token `<hl>`.
 - `sentence_answer`: a `string` feature, which is the same as the sentence but with the answer highlighted by a special token `<hl>`.

-Each of `…
-but with different information. The `…
-`…
+Each of the `paragraph_answer`, `paragraph_sentence`, and `sentence_answer` features can be used to train a question generation model,
+but with different information: the `paragraph_answer` and `sentence_answer` features are for answer-aware question generation, and
+the `paragraph_sentence` feature is for sentence-aware question generation.

 ### Data Splits

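The highlighted fields added above map directly onto question-generation model inputs. Below is a minimal sketch of consuming them with the `datasets` library; the Hub repository ID is an assumption (it is not stated in this commit), and the field access follows the schema described in the README.

```python
# Illustrative sketch, not part of this commit.
# The dataset ID below is a hypothetical placeholder; substitute the actual Hub repository ID.
from datasets import load_dataset

dataset = load_dataset("asahi417/qg_squad")  # hypothetical ID
example = dataset["train"][0]

# Answer-aware question generation: the paragraph with the answer span wrapped in <hl>.
answer_aware_input = example["paragraph_answer"]
# Sentence-aware question generation: the paragraph with the answer sentence wrapped in <hl>.
sentence_aware_input = example["paragraph_sentence"]
# In both cases the target sequence is the question itself.
target = example["question"]

print(answer_aware_input)
print(target)
```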
data/processed/*.jsonl (31 files: dev00–dev03, test00–test03, train00–train22)
CHANGED
The diffs for these files are too large to render. See raw diff.
process.py
CHANGED
@@ -32,12 +32,12 @@ def jsonline_reader(filename: str):

 def process_single_data(data: Dict):
     """ Convert single raw json data into QG format """
-    example = {'question': data["question"], '…
+    example = {'question': data["question"], 'paragraph': data["context"], 'answer': data["answer"]}

     # get sentence
-    position = example['…
+    position = example['paragraph'].find(example['answer'])
     assert position != -1
-    before_tmp = get_sentence(example['…
+    before_tmp = get_sentence(example['paragraph'][:position])
     if len(before_tmp) == 0:
         before = ''
         before_sentence = ''
@@ -49,7 +49,7 @@ def process_single_data(data: Dict):
         before = ' '.join(before_tmp[:-1])
         before_sentence = before_tmp[-1]
         before_sentence = before_sentence if before_sentence.endswith(' ') else '{} '.format(before_sentence)
-    after_tmp = get_sentence(example['…
+    after_tmp = get_sentence(example['paragraph'][position + len(example['answer']):])
     if len(after_tmp) == 0:
         after = ''
         after_sentence = ''
@@ -59,29 +59,27 @@ def process_single_data(data: Dict):
         after_sentence = after_sentence if after_sentence.startswith(' ') else ' {}'.format(after_sentence)
     example['sentence'] = '{}{}{}'.format(before_sentence, example['answer'], after_sentence)

-    # get …
+    # get paragraph_sentence
     before = '' if before == '' else '{} '.format(before)
     after = '' if after == '' else ' {}'.format(after)
     source_text = '{0}{1} {2} {1}{3}'.format(before, HIGHLIGHT_TOKEN, example['sentence'], after)
-    example['…
+    example['paragraph_sentence'] = re.sub(r'\s+', ' ', source_text)

-    # get …
+    # get paragraph_answer
     source_text = '{0}{1} {2} {1}{3}'.format(
-        example['…
-        example['…
-    example['…
+        example['paragraph'][:position], HIGHLIGHT_TOKEN, example['answer'],
+        example['paragraph'][position + len(example['answer']):])
+    example['paragraph_answer'] = re.sub(r'\s+', ' ', source_text)

     # get sentence_answer
-    …
-    if len(before) == 0 or before[-1].endswith('.'):
+    if len(before_tmp) == 0 or before_tmp[-1].endswith('.'):
         before = ''
     else:
-        before = …
-    …
-    if len(after) == 0:
+        before = before_tmp[-1] if before_tmp[-1].endswith(' ') else '{} '.format(before_tmp[-1])
+    if len(after_tmp) == 0:
         after = ''
     else:
-        after = …
+        after = after_tmp[0] if after_tmp[0].startswith(' ') else ' {}'.format(after_tmp[0])
     source_text = '{0}{1} {2} {1}{3}'.format(before, HIGHLIGHT_TOKEN, example['answer'], after)
     example['sentence_answer'] = re.sub(r'\s+', ' ', source_text)

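The heart of process.py is the span surgery that wraps the answer in `HIGHLIGHT_TOKEN`. Below is a self-contained sketch of the `paragraph_answer` construction from the new code, using the README example as input; the helper name is mine, while the format string and whitespace normalization mirror the diff above.

```python
import re

HIGHLIGHT_TOKEN = '<hl>'


def build_paragraph_answer(paragraph: str, answer: str) -> str:
    """Wrap the first occurrence of the answer span in <hl> tokens, as process.py does above."""
    position = paragraph.find(answer)
    assert position != -1, 'the answer must occur verbatim in the paragraph'
    source_text = '{0}{1} {2} {1}{3}'.format(
        paragraph[:position], HIGHLIGHT_TOKEN, answer,
        paragraph[position + len(answer):])
    # Collapse any doubled whitespace introduced around the highlight tokens.
    return re.sub(r'\s+', ' ', source_text)


print(build_paragraph_answer(
    "Heresy is any provocative belief or theory that is strongly at variance "
    "with established beliefs or customs.",
    "established beliefs or customs"))
# -> ... at variance with <hl> established beliefs or customs <hl>.
```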
qg_squad.py
CHANGED
@@ -38,16 +38,13 @@ class QGSquad(datasets.GeneratorBasedBuilder):
                     "answer": datasets.Value("string"),
                     "question": datasets.Value("string"),
                     "sentence": datasets.Value("string"),
-                    "…
+                    "paragraph": datasets.Value("string"),
                     "sentence_answer": datasets.Value("string"),
-                    "…
-                    "…
+                    "paragraph_answer": datasets.Value("string"),
+                    "paragraph_sentence": datasets.Value("string")
                 }
             ),
             supervised_keys=None,
-            # task_templates=[
-            #     Summarization(task='question generation', text_column="passage_answer", summary_column='question')
-            # ],
             homepage="https://github.com/asahi417/lm-question-generation"
         )

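For reference, the feature schema declared above can be rebuilt with the public `datasets` API and compared against a loaded split. This is an illustrative sketch only; the commented-out `load_dataset` call uses a hypothetical repository ID.

```python
import datasets

# The schema declared in qg_squad.py above, rebuilt with the public API.
expected_features = datasets.Features({
    "answer": datasets.Value("string"),
    "question": datasets.Value("string"),
    "sentence": datasets.Value("string"),
    "paragraph": datasets.Value("string"),
    "sentence_answer": datasets.Value("string"),
    "paragraph_answer": datasets.Value("string"),
    "paragraph_sentence": datasets.Value("string"),
})

# ds = datasets.load_dataset("asahi417/qg_squad", split="validation")  # hypothetical ID
# assert ds.features == expected_features
```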
reference_files/* (16 files, listed above)
RENAMED
Files renamed without content changes.