Tasks: Text2Text Generation
Languages: English
Size: 100K<n<1M
Tags: split-and-rephrase
Update files from the datasets library (from 1.4.0)
Release notes: https://github.com/huggingface/datasets/releases/tag/1.4.0
README.md CHANGED

- [Citation Information](#citation-information)
- [Contributions](#contributions)

## Dataset Description

- **Homepage:** [https://dataset-homepage/](https://dataset-homepage/)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of the generated dataset:** 370.41 MB
- **Total amount of disk used:** 466.04 MB

### Dataset Summary

One million English sentences, each split into two sentences that together preserve the original meaning, extracted from Wikipedia. Google's WikiSplit dataset was constructed automatically from the publicly available Wikipedia revision history. Although the dataset contains some inherent noise, it can serve as valuable training data for models that split or merge sentences.
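
A minimal loading sketch with the `datasets` library, assuming the dataset is published on the Hub under the `wiki_split` identifier and exposes a `complex_sentence` field alongside the two simple-sentence fields listed below:

```python
from datasets import load_dataset

# Load WikiSplit from the Hugging Face Hub; "wiki_split" is the
# assumed dataset identifier.
dataset = load_dataset("wiki_split")

# One training record pairs a long sentence with its two-sentence
# rewrite; `complex_sentence` is an assumed field name -- only the
# two simple-sentence fields are documented in this card.
example = dataset["train"][0]
print(example["complex_sentence"])
print(example["simple_sentence_1"])
print(example["simple_sentence_2"])
```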

### Supported Tasks

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Languages

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## Dataset Structure

We show detailed information for up to 5 configurations of the dataset.

### Data Instances

#### default

An example of 'train' looks as follows.

```
…
}
```

### Data Fields

The data fields are the same among all splits.

- `simple_sentence_1`: a `string` feature.
- `simple_sentence_2`: a `string` feature.
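
For Text2Text Generation training, each record can be flattened into an input/target pair; a hedged sketch, reusing the `dataset` object from the loading example and the assumed `complex_sentence` field:

```python
def to_text2text_pair(example: dict) -> dict:
    """Flatten one record into an input/target pair for seq2seq training."""
    return {
        "input_text": example["complex_sentence"],  # assumed source field
        # Join the two shorter sentences to form the target rewrite.
        "target_text": example["simple_sentence_1"] + " " + example["simple_sentence_2"],
    }

# Applied over the training split with `Dataset.map`:
pairs = dataset["train"].map(to_text2text_pair)
```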

### Data Splits Sample Size

| name    |  train | validation | test |
|---------|-------:|-----------:|-----:|
| default | 989944 |       5000 | 5000 |
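
The split sizes can be checked programmatically; a short sketch reusing the `dataset` object loaded above:

```python
# Sanity-check the split sizes against the table above.
for split_name, split in dataset.items():
    print(f"{split_name}: {len(split)} examples")
# Expected: train 989944, validation 5000, test 5000.
```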

## Dataset Creation

### Curation Rationale

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Source Data

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Annotations

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Personal and Sensitive Information

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Discussion of Biases

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Other Known Limitations

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## Additional Information

### Dataset Curators

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Licensing Information

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Citation Information

```
@InProceedings{BothaEtAl2018,