---
language:
- en
license: cc-by-sa-4.0
size_categories:
- 100K<n<1M
task_categories:
- text2text-generation
pretty_name: WikiSplit++
dataset_info:
  features:
  - name: id
    dtype: int64
  - name: complex
    dtype: string
  - name: simple_reversed
    dtype: string
  - name: simple_tokenized
    sequence: string
  - name: simple_original
    dtype: string
  - name: entailment_prob
    dtype: float64
  - name: split
    dtype: string
  splits:
  - name: train
    num_bytes: 380811358.0
    num_examples: 504375
  - name: validation
    num_bytes: 47599265.0
    num_examples: 63065
  - name: test
    num_bytes: 47559833.0
    num_examples: 62993
  download_size: 337857760
  dataset_size: 475970456.0
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: validation
    path: data/validation-*
  - split: test
    path: data/test-*
---

# WikiSplit++

This dataset is the Hugging Face version of WikiSplit++.
WikiSplit++ enhances the original WikiSplit by applying two techniques: filtering through NLI classification and sentence-order reversing. Compared to the original WikiSplit, these remove noisy instances and reduce hallucinations in models trained on the data.
The preprocessed WikiSplit dataset that formed the basis for WikiSplit++ can be found [here](https://huggingface.co/datasets/cl-nagoya/wikisplit).

## Usage

```python
import datasets as ds

# Omitting `split=...` loads all three splits as a DatasetDict.
dataset: ds.DatasetDict = ds.load_dataset("cl-nagoya/wikisplit-pp")

print(dataset)
# DatasetDict({
#     train: Dataset({
#         features: ['id', 'complex', 'simple_reversed', 'simple_tokenized', 'simple_original', 'entailment_prob', 'split'],
#         num_rows: 504375
#     })
#     validation: Dataset({
#         features: ['id', 'complex', 'simple_reversed', 'simple_tokenized', 'simple_original', 'entailment_prob', 'split'],
#         num_rows: 63065
#     })
#     test: Dataset({
#         features: ['id', 'complex', 'simple_reversed', 'simple_tokenized', 'simple_original', 'entailment_prob', 'split'],
#         num_rows: 62993
#     })
# })
```

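To load a single split directly, pass `split=...`, which returns a `Dataset` rather than a `DatasetDict`. A minimal sketch of inspecting one example, continuing from the snippet above:

```python
# Passing `split=...` returns a single Dataset instead of a DatasetDict.
train: ds.Dataset = ds.load_dataset("cl-nagoya/wikisplit-pp", split="train")

example = train[0]
print(example["complex"])           # the complex source sentence
print(example["simple_reversed"])   # simple sentences joined in reversed order
print(example["simple_tokenized"])  # list of simple sentences in original order
print(example["entailment_prob"])   # average entailment probability
```
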
### Data Fields

- id: The ID of the example (note that it is not compatible with IDs in the existing WikiSplit)
- complex: A complex sentence
- simple_reversed: The simple sentences, joined in reversed order
- simple_tokenized: A list of the simple sentences split by [PySBD](https://github.com/nipunsadvilkar/pySBD), in their original order (usually two elements)
- simple_original: The simple sentences in their original order
- entailment_prob: The probability that the complex sentence entails each simple sentence, averaged over the simple sentences; [DeBERTa-xxl](https://huggingface.co/microsoft/deberta-v2-xxlarge-mnli) is used for the NLI classification (see the sketch below)
- split: The split (train, val, or tune) the example belonged to in the original WikiSplit dataset

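As a rough illustration of how entailment_prob can be reproduced, the sketch below scores each simple sentence against the complex sentence with the same NLI model and averages the entailment probabilities. This is a minimal sketch, not the paper's exact pipeline; in particular, the label lookup relies on the label names stored in the model config.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "microsoft/deberta-v2-xxlarge-mnli"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
model.eval()

def entailment_prob(complex_sent: str, simple_sents: list[str]) -> float:
    # Index of the "ENTAILMENT" class, taken from the model config.
    ent_idx = model.config.label2id["ENTAILMENT"]
    probs = []
    for simple in simple_sents:
        # NLI with premise = complex sentence, hypothesis = simple sentence.
        inputs = tokenizer(complex_sent, simple, return_tensors="pt", truncation=True)
        with torch.no_grad():
            logits = model(**inputs).logits
        probs.append(torch.softmax(logits, dim=-1)[0, ent_idx].item())
    # Average the entailment probability over the simple sentences.
    return sum(probs) / len(probs)
```
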
## Paper

Tsukagoshi et al., [WikiSplit++: Easy Data Refinement for Split and Rephrase](https://arxiv.org/abs/2404.09002), LREC-COLING 2024.

## Abstract

The task of Split and Rephrase, which splits a complex sentence into multiple simple sentences with the same meaning, improves readability and enhances the performance of downstream tasks in natural language processing (NLP).
However, while Split and Rephrase can be improved using a text-to-text generation approach that applies encoder-decoder models fine-tuned with a large-scale dataset, it still suffers from hallucinations and under-splitting.
To address these issues, this paper presents a simple and strong data refinement approach.
Here, we create WikiSplit++ by removing instances in WikiSplit where complex sentences do not entail at least one of the simpler sentences and reversing the order of reference simple sentences.
Experimental results show that training with WikiSplit++ leads to better performance than training with WikiSplit, even with fewer training instances.
In particular, our approach yields significant gains in the number of splits and the entailment ratio, a proxy for measuring hallucinations.

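For intuition, the two refinement steps can be sketched on a raw WikiSplit-style pair as follows. This reuses the entailment_prob helper sketched in the Data Fields section; the 0.5 threshold is an illustrative assumption, and the paper's actual decision rule (keeping a pair only when the complex sentence entails every simple sentence) may be implemented differently.

```python
import pysbd

segmenter = pysbd.Segmenter(language="en", clean=False)

def refine(complex_sent: str, simple_text: str, threshold: float = 0.5):
    # Split the reference side into sentences (cf. simple_tokenized).
    sentences = segmenter.segment(simple_text)
    # NLI filtering: discard the pair unless the complex sentence entails
    # every simple sentence (the threshold here is an assumption).
    if any(entailment_prob(complex_sent, [s]) < threshold for s in sentences):
        return None
    # Sentence-order reversing: join the simple sentences in reverse order
    # to form the reference (cf. simple_reversed).
    return " ".join(s.strip() for s in reversed(sentences))
```
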
## License

[WikiSplit](https://github.com/google-research-datasets/wiki-split) is distributed under the CC-BY-SA 4.0 license.
This dataset follows suit and is distributed under the CC-BY-SA 4.0 license.