---
annotations_creators:
- machine-generated
language:
- en
language_creators:
- found
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
pretty_name: WikiSplit
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- text2text-generation
task_ids: []
paperswithcode_id: wikisplit
tags:
- split-and-rephrase
dataset_info:
features:
- name: complex_sentence
dtype: string
- name: simple_sentence_1
dtype: string
- name: simple_sentence_2
dtype: string
splits:
- name: test
num_bytes: 1949294
num_examples: 5000
- name: train
num_bytes: 384513073
num_examples: 989944
- name: validation
num_bytes: 1935459
num_examples: 5000
download_size: 100279164
dataset_size: 388397826
---
# Dataset Card for "wiki_split"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://github.com/google-research-datasets/wiki-split](https://github.com/google-research-datasets/wiki-split)
- **Repository:** https://github.com/google-research-datasets/wiki-split
- **Paper:** [Learning To Split and Rephrase From Wikipedia Edit History](https://arxiv.org/abs/1808.09468)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 100.28 MB
- **Size of the generated dataset:** 388.40 MB
- **Total amount of disk used:** 488.68 MB
### Dataset Summary
One million English sentences, each split into two sentences that together preserve the original meaning, extracted from Wikipedia.
Google's WikiSplit dataset was constructed automatically from the publicly available Wikipedia revision history. Although
the dataset contains some inherent noise, it can serve as valuable training data for models that split or merge sentences.
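The dataset can be loaded with the Hugging Face `datasets` library. A minimal sketch, assuming `datasets` is installed (the `wiki_split` identifier is taken from this card):
```python
from datasets import load_dataset

# Downloads and caches the train/validation/test splits.
dataset = load_dataset("wiki_split")

# Each example pairs one complex sentence with its two rewritten sentences.
example = dataset["train"][0]
print(example["complex_sentence"])
print(example["simple_sentence_1"], example["simple_sentence_2"])
```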
### Supported Tasks and Leaderboards
- Split and Rephrase
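One plausible sequence-to-sequence framing for this task, sketched below, maps the complex sentence to the concatenation of the two simple sentences. The single-space separator is an assumption, not the paper's exact preprocessing:
```python
from datasets import load_dataset

dataset = load_dataset("wiki_split")

def to_seq2seq(example):
    # Source: the original complex sentence.
    # Target: the two rewrites joined with a single space -- an assumed
    # convention; substitute whatever separator your model expects.
    return {
        "source": example["complex_sentence"].strip(),
        "target": (example["simple_sentence_1"].strip() + " "
                   + example["simple_sentence_2"].strip()),
    }

train = dataset["train"].map(to_seq2seq)
print(train[0]["source"])
print(train[0]["target"])
```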
### Languages
The text is in English (BCP-47: `en`), drawn from English Wikipedia; the dataset is monolingual.
## Dataset Structure
### Data Instances
#### default
- **Size of downloaded dataset files:** 100.28 MB
- **Size of the generated dataset:** 388.40 MB
- **Total amount of disk used:** 488.68 MB
An example from the 'train' split looks as follows:
```
{
"complex_sentence": " '' As she translates from one language to another , she tries to find the appropriate wording and context in English that would correspond to the work in Spanish her poems and stories started to have differing meanings in their respective languages .",
"simple_sentence_1": "' '' As she translates from one language to another , she tries to find the appropriate wording and context in English that would correspond to the work in Spanish . ",
"simple_sentence_2": " Ergo , her poems and stories started to have differing meanings in their respective languages ."
}
```
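Note that sentences retain Wikipedia's space-tokenized form (detached punctuation, stray quote tokens). If detokenized text is preferred, a rough, hypothetical cleanup heuristic might look like:
```python
import re

def detokenize(text: str) -> str:
    # Collapse runs of whitespace, then re-attach punctuation to the
    # preceding token. A rough heuristic, not an official recipe.
    text = re.sub(r"\s+", " ", text).strip()
    return re.sub(r"\s+([.,;:!?])", r"\1", text)

print(detokenize(" Ergo , her poems and stories started to have differing meanings in their respective languages ."))
# Ergo, her poems and stories started to have differing meanings in their respective languages.
```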
### Data Fields
The data fields are the same among all splits.
#### default
- `complex_sentence`: a `string` feature.
- `simple_sentence_1`: a `string` feature.
- `simple_sentence_2`: a `string` feature.
### Data Splits
| name |train |validation|test|
|-------|-----:|---------:|---:|
|default|989944| 5000|5000|
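These counts can be double-checked after loading (a small sketch using the `datasets` API):
```python
from datasets import load_dataset

dataset = load_dataset("wiki_split")
for name, split in dataset.items():
    print(name, split.num_rows)
# train 989944, validation 5000, test 5000
```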
## Dataset Creation
### Curation Rationale
The dataset was created to provide large-scale training data for the split-and-rephrase task: rewriting one complex sentence as two simpler sentences that together preserve its meaning.
### Source Data
#### Initial Data Collection and Normalization
The examples were extracted automatically from the publicly available Wikipedia revision history: edits in which one sentence was rewritten as two sentences yield a complex sentence paired with its two simpler rewrites. See the [paper](https://arxiv.org/abs/1808.09468) for details.
#### Who are the source language producers?
Wikipedia contributors, who wrote both the original sentences and their edited versions.
### Annotations
#### Annotation process
The dataset is machine-generated: sentence pairs were mined automatically from the Wikipedia revision history, with no manual annotation.
#### Who are the annotators?
There are no human annotators; the annotations are machine-generated (see above).
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
The WikiSplit dataset is a verbatim copy of certain content from the publicly available Wikipedia revision history.
The dataset is therefore licensed under [CC BY-SA 4.0](http://creativecommons.org/licenses/by-sa/4.0/).
Any third party content or data is provided "As Is" without any warranty, express or implied.
### Citation Information
```
@inproceedings{botha-etal-2018-learning,
title = "Learning To Split and Rephrase From {W}ikipedia Edit History",
author = "Botha, Jan A. and
Faruqui, Manaal and
Alex, John and
Baldridge, Jason and
Das, Dipanjan",
booktitle = "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
month = oct # "-" # nov,
year = "2018",
address = "Brussels, Belgium",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/D18-1080",
doi = "10.18653/v1/D18-1080",
pages = "732--737",
}
```
### Contributions
Thanks to [@thomwolf](https://github.com/thomwolf), [@patrickvonplaten](https://github.com/patrickvonplaten), [@albertvillanova](https://github.com/albertvillanova), [@lewtun](https://github.com/lewtun) for adding this dataset.