system (HF staff) committed
Commit 84dd1af
1 Parent(s): 97b3e9e

Update files from the datasets library (from 1.4.0)


Release notes: https://github.com/huggingface/datasets/releases/tag/1.4.0

Files changed (1)
  1. README.md +21 -21
README.md CHANGED
@@ -27,7 +27,7 @@
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

- ## [Dataset Description](#dataset-description)
+ ## Dataset Description

  - **Homepage:** [https://dataset-homepage/](https://dataset-homepage/)
  - **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
@@ -37,25 +37,25 @@
  - **Size of the generated dataset:** 370.41 MB
  - **Total amount of disk used:** 466.04 MB

- ### [Dataset Summary](#dataset-summary)
+ ### Dataset Summary

  One million English sentences, each split into two sentences that together preserve the original meaning, extracted from Wikipedia
  Google's WikiSplit dataset was constructed automatically from the publicly available Wikipedia revision history. Although
  the dataset contains some inherent noise, it can serve as valuable training data for models that split or merge sentences.

- ### [Supported Tasks](#supported-tasks)
+ ### Supported Tasks

  [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

- ### [Languages](#languages)
+ ### Languages

  [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

- ## [Dataset Structure](#dataset-structure)
+ ## Dataset Structure

  We show detailed information for up to 5 configurations of the dataset.

- ### [Data Instances](#data-instances)
+ ### Data Instances

  #### default

@@ -72,7 +72,7 @@ An example of 'train' looks as follows.
  }
  ```

- ### [Data Fields](#data-fields)
+ ### Data Fields

  The data fields are the same among all splits.

@@ -81,55 +81,55 @@ The data fields are the same among all splits.
  - `simple_sentence_1`: a `string` feature.
  - `simple_sentence_2`: a `string` feature.

- ### [Data Splits Sample Size](#data-splits-sample-size)
+ ### Data Splits Sample Size

  | name |train |validation|test|
  |-------|-----:|---------:|---:|
  |default|989944| 5000|5000|

- ## [Dataset Creation](#dataset-creation)
+ ## Dataset Creation

- ### [Curation Rationale](#curation-rationale)
+ ### Curation Rationale

  [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

- ### [Source Data](#source-data)
+ ### Source Data

  [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

- ### [Annotations](#annotations)
+ ### Annotations

  [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

- ### [Personal and Sensitive Information](#personal-and-sensitive-information)
+ ### Personal and Sensitive Information

  [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

- ## [Considerations for Using the Data](#considerations-for-using-the-data)
+ ## Considerations for Using the Data

- ### [Social Impact of Dataset](#social-impact-of-dataset)
+ ### Social Impact of Dataset

  [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

- ### [Discussion of Biases](#discussion-of-biases)
+ ### Discussion of Biases

  [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

- ### [Other Known Limitations](#other-known-limitations)
+ ### Other Known Limitations

  [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

- ## [Additional Information](#additional-information)
+ ## Additional Information

- ### [Dataset Curators](#dataset-curators)
+ ### Dataset Curators

  [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

- ### [Licensing Information](#licensing-information)
+ ### Licensing Information

  [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

- ### [Citation Information](#citation-information)
+ ### Citation Information

  ```
  @InProceedings{BothaEtAl2018,
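
For readers landing on this card from the commit, the sketch below shows one way to load and inspect the dataset with the `datasets` library that this update tracks. It is a minimal illustration, not part of the card itself: the Hub identifier `wiki_split` is an assumption inferred from the summary above, and the field names are the ones listed under "Data Fields".

```python
# Minimal sketch of loading the dataset described by this card.
# Assumptions: the Hub identifier is `wiki_split`, and the fields
# `simple_sentence_1` / `simple_sentence_2` are those listed under
# "Data Fields" above.
from datasets import load_dataset

# Download the default configuration; the card's split table reports
# 989,944 train, 5,000 validation and 5,000 test examples.
dataset = load_dataset("wiki_split")

for split_name, split in dataset.items():
    print(split_name, len(split))

# Each example holds one Wikipedia sentence split into two simpler
# sentences that together preserve the original meaning.
example = dataset["train"][0]
print(example["simple_sentence_1"])
print(example["simple_sentence_2"])
```

Because the pairs were mined automatically from Wikipedia revision history, the card notes some inherent noise, so downstream code should not assume every pair is a perfectly clean split.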