annotations_creators:
- machine-generated
language:
- en
language_creators:
- found
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
pretty_name: WikiSplit
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- text2text-generation
task_ids:
- text2text-generation-other-split-and-rephrase
paperswithcode_id: wikisplit
Dataset Card for "wiki_split"
Table of Contents
- Dataset Description
- Dataset Structure
- Dataset Creation
- Considerations for Using the Data
- Additional Information
Dataset Description
- Homepage: https://github.com/google-research-datasets/wiki-split
- Repository: https://github.com/google-research-datasets/wiki-split
- Paper: Learning To Split and Rephrase From Wikipedia Edit History (https://aclanthology.org/D18-1080)
- Point of Contact: More Information Needed
- Size of downloaded dataset files: 95.63 MB
- Size of the generated dataset: 370.41 MB
- Total amount of disk used: 466.04 MB
Dataset Summary
One million English sentences, each split into two sentences that together preserve the original meaning, extracted from Wikipedia. Google's WikiSplit dataset was constructed automatically from the publicly available Wikipedia revision history. Although the dataset contains some inherent noise, it can serve as valuable training data for models that split or merge sentences.
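For quick orientation, here is a minimal sketch of loading the data with the Hugging Face `datasets` library (assuming the Hub id `wiki_split`):

```python
from datasets import load_dataset

# Download and cache the dataset; returns a DatasetDict
# with "train", "validation", and "test" splits.
dataset = load_dataset("wiki_split")

print(dataset)
```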
Supported Tasks and Leaderboards
- Split and Rephrase: given a complex sentence, produce two simpler sentences that together preserve its meaning.
Languages
The text in the dataset is in English (en), as found in Wikipedia articles.
Dataset Structure
Data Instances
default
- Size of downloaded dataset files: 95.63 MB
- Size of the generated dataset: 370.41 MB
- Total amount of disk used: 466.04 MB
An example of 'train' looks as follows.
{
"complex_sentence": " '' As she translates from one language to another , she tries to find the appropriate wording and context in English that would correspond to the work in Spanish her poems and stories started to have differing meanings in their respective languages .",
"simple_sentence_1": "' '' As she translates from one language to another , she tries to find the appropriate wording and context in English that would correspond to the work in Spanish . ",
"simple_sentence_2": " Ergo , her poems and stories started to have differing meanings in their respective languages ."
}
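A row like the one above can be inspected directly; a short sketch (the row index is arbitrary):

```python
from datasets import load_dataset

train = load_dataset("wiki_split", split="train")

# Every row exposes the same three string fields.
example = train[0]
print(example["complex_sentence"])
print(example["simple_sentence_1"])
print(example["simple_sentence_2"])
```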
Data Fields
The data fields are the same among all splits.
default
- `complex_sentence`: a `string` feature.
- `simple_sentence_1`: a `string` feature.
- `simple_sentence_2`: a `string` feature.
Data Splits
name | train | validation | test |
---|---|---|---|
default | 989944 | 5000 | 5000 |
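The split sizes can be verified programmatically (a sketch using the same `datasets` API as above):

```python
from datasets import load_dataset

dataset = load_dataset("wiki_split")

# Print the number of rows per split; should match the table above.
for name, split in dataset.items():
    print(name, split.num_rows)
```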
Dataset Creation
Curation Rationale
Source Data
Initial Data Collection and Normalization
Who are the source language producers?
Annotations
Annotation process
Who are the annotators?
Personal and Sensitive Information
Considerations for Using the Data
Social Impact of Dataset
Discussion of Biases
Other Known Limitations
Additional Information
Dataset Curators
Licensing Information
The WikiSplit dataset is a verbatim copy of certain content from the publicly available Wikipedia revision history. The dataset is therefore licensed under CC BY-SA 4.0. Any third party content or data is provided "As Is" without any warranty, express or implied.
Citation Information
@inproceedings{botha-etal-2018-learning,
title = "Learning To Split and Rephrase From {W}ikipedia Edit History",
author = "Botha, Jan A. and
Faruqui, Manaal and
Alex, John and
Baldridge, Jason and
Das, Dipanjan",
booktitle = "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
month = oct # "-" # nov,
year = "2018",
address = "Brussels, Belgium",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/D18-1080",
doi = "10.18653/v1/D18-1080",
pages = "732--737",
}
Contributions
Thanks to @thomwolf, @patrickvonplaten, @albertvillanova, @lewtun for adding this dataset.