add licensing, notice and take down policy

README.md (changed):

### Dataset Summary

An open-source replication of the WebText dataset from OpenAI that was used to train GPT-2.

This distribution was created by Aaron Gokaslan and Vanya Cohen of Brown University.
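
For reference, the corpus can be loaded with the Hugging Face `datasets` library. This is a minimal sketch: the `openwebtext` identifier and the single `train` split with a `text` field reflect this repository, but verify against the current dataset viewer before relying on them.

```python
from datasets import load_dataset

# Stream the corpus so the full ~38GB download isn't required up front.
dataset = load_dataset("openwebtext", split="train", streaming=True)

# Each example is a single plain-text document.
for example in dataset:
    print(example["text"][:200])
    break
```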

### Supported Tasks and Leaderboards

[...]

#### Initial Data Collection and Normalization

The authors started by extracting all Reddit post URLs from the Reddit submissions dataset. These links were deduplicated, filtered to exclude non-HTML content, and then shuffled randomly. The links were then distributed to several machines in parallel for download, and all web pages were extracted using the `newspaper` Python package. Non-English web pages were then filtered out using Facebook's fastText language identifier.
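
A rough sketch of this per-URL stage is below. It is illustrative, not the authors' code: it assumes the `newspaper3k` and `fasttext` packages and a pretrained `lid.176.bin` language-identification model, none of which are pinned down by this card.

```python
from typing import Optional

import fasttext
from newspaper import Article

# Pretrained fastText language-identification model (downloaded separately).
lang_model = fasttext.load_model("lid.176.bin")

def fetch_english_text(url: str) -> Optional[str]:
    """Download a page, extract its main text, and keep it only if English."""
    article = Article(url)
    article.download()
    article.parse()
    text = article.text
    if not text:
        return None
    # fastText expects single-line input and returns labels like "__label__en".
    labels, _ = lang_model.predict(text.replace("\n", " "))
    return text if labels[0] == "__label__en" else None
```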

Subsequently, near-duplicate documents were identified using locality-sensitive hashing (LSH). Documents were hashed into sets of 5-grams, and all documents whose pairwise similarity exceeded a threshold of 0.5 were removed as near-duplicates. The remaining documents were tokenized, and documents with fewer than 128 tokens were removed. This left 38GB of text data (40GB using SI units) from 8,013,769 documents.
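
One plausible implementation of this deduplication step uses MinHash LSH from the `datasketch` library, assuming the similarity is Jaccard over 5-gram sets (which is what MinHash LSH approximates). The original code and its exact MinHash parameters are not given in this card, so `num_perm=128` is an assumption.

```python
from datasketch import MinHash, MinHashLSH

def shingles(text: str, n: int = 5) -> set:
    """Word 5-grams, matching the 5-gram sets described above."""
    words = text.split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

# Jaccard similarity threshold of 0.5, as described above.
lsh = MinHashLSH(threshold=0.5, num_perm=128)

def is_near_duplicate(doc_id: str, text: str) -> bool:
    """Index the document; return True if a near-duplicate was already seen."""
    mh = MinHash(num_perm=128)
    for gram in shingles(text):
        mh.update(gram.encode("utf-8"))
    if lsh.query(mh):  # any previously indexed doc above the 0.5 threshold
        return True
    lsh.insert(doc_id, mh)
    return False
```

Documents flagged by `is_near_duplicate` would be dropped before the 128-token length filter.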

#### Who are the source language producers?

[...]

### Annotations

The dataset doesn't contain annotations.

### Personal and Sensitive Information

[...]

### Licensing Information

These data are released under the following licensing scheme from the original authors ([source](https://skylion007.github.io/OpenWebTextCorpus/)):

```
We do not own any of the text from which these data has been extracted.

We license the actual packaging of these parallel data under the Creative Commons CC0 license ("no rights reserved"): https://creativecommons.org/share-your-work/public-domain/cc0/
```

#### Notice policy

Should you consider that our data contains material that is owned by you and should therefore not be reproduced here, please:

- Clearly identify yourself, with detailed contact information such as an address, telephone number, or email address at which you can be contacted.
- Clearly identify the copyrighted work claimed to be infringed.
- Clearly identify the material that is claimed to be infringing, and provide information reasonably sufficient to allow us to locate the material.

Then contact us at the following email addresses: openwebtext at gmail.com and datasets at huggingface.co.

#### Take down policy

The original authors will comply with legitimate requests by removing the affected sources from the next release of the corpus. Hugging Face will also update this repository accordingly.

### Citation Information

[...]

### Contributions

Thanks to [@richarddwang](https://github.com/richarddwang) for adding this dataset.