Make the dataset streamable

#3 opened by lhoestq (HF staff)
.gitattributes CHANGED
@@ -25,3 +25,24 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zstandard filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ urlsf_subset00.tar filter=lfs diff=lfs merge=lfs -text
+ urlsf_subset01.tar filter=lfs diff=lfs merge=lfs -text
+ urlsf_subset02.tar filter=lfs diff=lfs merge=lfs -text
+ urlsf_subset03.tar filter=lfs diff=lfs merge=lfs -text
+ urlsf_subset04.tar filter=lfs diff=lfs merge=lfs -text
+ urlsf_subset05.tar filter=lfs diff=lfs merge=lfs -text
+ urlsf_subset06.tar filter=lfs diff=lfs merge=lfs -text
+ urlsf_subset07.tar filter=lfs diff=lfs merge=lfs -text
+ urlsf_subset08.tar filter=lfs diff=lfs merge=lfs -text
+ urlsf_subset09.tar filter=lfs diff=lfs merge=lfs -text
+ urlsf_subset10.tar filter=lfs diff=lfs merge=lfs -text
+ urlsf_subset11.tar filter=lfs diff=lfs merge=lfs -text
+ urlsf_subset12.tar filter=lfs diff=lfs merge=lfs -text
+ urlsf_subset13.tar filter=lfs diff=lfs merge=lfs -text
+ urlsf_subset14.tar filter=lfs diff=lfs merge=lfs -text
+ urlsf_subset15.tar filter=lfs diff=lfs merge=lfs -text
+ urlsf_subset16.tar filter=lfs diff=lfs merge=lfs -text
+ urlsf_subset17.tar filter=lfs diff=lfs merge=lfs -text
+ urlsf_subset18.tar filter=lfs diff=lfs merge=lfs -text
+ urlsf_subset19.tar filter=lfs diff=lfs merge=lfs -text
+ urlsf_subset20.tar filter=lfs diff=lfs merge=lfs -text
README.md CHANGED
@@ -28,10 +28,10 @@ dataset_info:
  config_name: plain_text
  splits:
  - name: train
- num_bytes: 39769494896
+ num_bytes: 39769491688
  num_examples: 8013769
- download_size: 12880027468
- dataset_size: 39769494896
+ download_size: 12880189440
+ dataset_size: 39769491688
  ---

  # Dataset Card for "openwebtext"
@@ -72,7 +72,9 @@ dataset_info:

  ### Dataset Summary

- An open-source replication of the WebText dataset from OpenAI.
+ An open-source replication of the WebText dataset from OpenAI, which was used to train GPT-2.
+
+ This distribution was created by Aaron Gokaslan and Vanya Cohen of Brown University.

  ### Supported Tasks and Leaderboards

@@ -124,7 +126,9 @@ The data fields are the same among all splits.

  #### Initial Data Collection and Normalization

- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+ The authors started by extracting all Reddit post URLs from the Reddit submissions dataset. These links were deduplicated, filtered to exclude non-HTML content, and then shuffled randomly. The links were then distributed to several machines in parallel for download, and all web pages were extracted using the newspaper Python package. Using Facebook FastText, non-English web pages were filtered out.
+
+ Subsequently, near-duplicate documents were identified using locality-sensitive hashing (LSH). Documents were hashed into sets of 5-grams, and all documents with a similarity greater than 0.5 were removed. The remaining documents were tokenized, and documents with fewer than 128 tokens were removed. This left 38GB of text data (40GB using SI units) from 8,013,769 documents.

  #### Who are the source language producers?

@@ -132,13 +136,7 @@ The data fields are the same among all splits.

  ### Annotations

- #### Annotation process
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- #### Who are the annotators?
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+ The dataset doesn't contain annotations.

  ### Personal and Sensitive Information

@@ -166,7 +164,30 @@ The data fields are the same among all splits.

  ### Licensing Information

- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+ These data are released under this licensing scheme from the original authors ([source](https://skylion007.github.io/OpenWebTextCorpus/)):
+
+ ```
+ We do not own any of the text from which these data has been extracted.
+
+ We license the actual packaging of these parallel data under the [Creative Commons CC0 license (“no rights reserved”)](https://creativecommons.org/share-your-work/public-domain/cc0/)
+ ```
+
+ #### Notice policy
+
+ Should you consider that our data contains material that is owned by you and should therefore not be reproduced here, please:
+
+ Clearly identify yourself, with detailed contact data such as an address, telephone number or email address at which you can be contacted.
+
+ Clearly identify the copyrighted work claimed to be infringed.
+
+ Clearly identify the material that is claimed to be infringing and information reasonably sufficient to allow us to locate the material.
+
+ And contact us at the following email address: openwebtext at gmail.com and datasets at huggingface.co.
+
+ #### Take down policy
+
+ The original authors will comply with legitimate requests by removing the affected sources from the next release of the corpus.
+ Hugging Face will also update this repository accordingly.

  ### Citation Information

@@ -181,4 +202,4 @@ The data fields are the same among all splits.

  ### Contributions

- Thanks to [@richarddwang](https://github.com/richarddwang) for adding this dataset.
+ Thanks to [@richarddwang](https://github.com/richarddwang) for adding this dataset.
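The near-deduplication step described under "Initial Data Collection and Normalization" above can be sketched in a few lines. This is a hypothetical reconstruction using the datasketch library, not the original authors' tooling: documents are reduced to word 5-gram sets, MinHash-signed, and dropped when their estimated Jaccard similarity to an already-kept document exceeds 0.5.

```python
# Hypothetical sketch of 5-gram MinHash/LSH dedup; the original OpenWebText
# pipeline is not part of this repository.
from datasketch import MinHash, MinHashLSH

NUM_PERM = 128  # number of MinHash permutations (illustrative choice)

def five_grams(text):
    """Reduce a document to its set of word 5-grams."""
    tokens = text.split()
    return {" ".join(tokens[i:i + 5]) for i in range(max(len(tokens) - 4, 0))}

def deduplicate(documents):
    """Keep only documents whose estimated Jaccard similarity to every
    previously kept document is at most 0.5."""
    lsh = MinHashLSH(threshold=0.5, num_perm=NUM_PERM)
    kept = []
    for doc_id, text in enumerate(documents):
        minhash = MinHash(num_perm=NUM_PERM)
        for gram in five_grams(text):
            minhash.update(gram.encode("utf-8"))
        if lsh.query(minhash):  # near-duplicate of an already-kept document
            continue
        lsh.insert(str(doc_id), minhash)
        kept.append(text)
    return kept
```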
dataset_infos.json DELETED
@@ -1 +0,0 @@
- {"plain_text": {"description": "An open-source replication of the WebText dataset from OpenAI.\n", "citation": "@misc{Gokaslan2019OpenWeb,\n title={OpenWebText Corpus},\n author={Aaron Gokaslan*, Vanya Cohen*, Ellie Pavlick, Stefanie Tellex},\n howpublished{\\url{http://Skylion007.github.io/OpenWebTextCorpus}},\n year={2019}\n}\n", "homepage": "https://skylion007.github.io/OpenWebTextCorpus/", "license": "", "features": {"text": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "openwebtext", "config_name": "plain_text", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 39769494896, "num_examples": 8013769, "dataset_name": "openwebtext"}}, "download_checksums": {"https://zenodo.org/record/3834942/files/openwebtext.tar.xz": {"num_bytes": 12880027468, "checksum": "9fe39d154c5bc67da8c359415372b79510eb1e2edb0d035fe4f7fc3a732b9336"}}, "download_size": 12880027468, "post_processing_size": null, "dataset_size": 39769494896, "size_in_bytes": 52649522364}}
 
 
openwebtext.py CHANGED
@@ -14,10 +14,7 @@
  # limitations under the License.
  """The Open WebText Corpus"""

-
- import os
  import re
- from itertools import chain

  import datasets

@@ -35,7 +32,8 @@ _DESCRIPTION = """\
  An open-source replication of the WebText dataset from OpenAI.
  """

- _URL = "https://zenodo.org/record/3834942/files/openwebtext.tar.xz"
+ _N_DATA_FILES = 21
+ _DATA_FILES = ["subsets/urlsf_subset{:02d}.tar".format(i) for i in range(_N_DATA_FILES)]


  class Openwebtext(datasets.GeneratorBasedBuilder):
@@ -58,29 +56,24 @@ class Openwebtext(datasets.GeneratorBasedBuilder):
          )

      def _split_generators(self, dl_manager):
-         dl_dir = dl_manager.download_and_extract(_URL)
-         owt_dir = os.path.join(dl_dir, "openwebtext")
-         subset_xzs = [
-             os.path.join(owt_dir, file_name)
-             for file_name in sorted(os.listdir(owt_dir))
-             if file_name.endswith("xz")  # filter out ...xz.lock
-         ]
-         ex_dirs = dl_manager.extract(subset_xzs, num_proc=round(os.cpu_count() * 0.75))
-         nested_txt_files = [
-             [
-                 os.path.join(ex_dir, txt_file_name)
-                 for txt_file_name in sorted(os.listdir(ex_dir))
-                 if txt_file_name.endswith("txt")
-             ]
-             for ex_dir in ex_dirs
-         ]
-         txt_files = chain(*nested_txt_files)
+         archives = dl_manager.download(_DATA_FILES)
          return [
-             datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={"txt_files": txt_files}),
+             datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={
+                 "archive_iterators": [
+                     dl_manager.iter_archive(archive) for archive in archives
+                 ],
+                 "iter_archive": dl_manager.iter_archive
+             }),
          ]

-     def _generate_examples(self, txt_files):
+     def _generate_examples(self, archive_iterators, iter_archive):
          """Yields examples."""
-         for idx, filepath in enumerate(txt_files):
-             with open(filepath, encoding="utf-8") as f:
-                 yield idx, {"text": re.sub("\n\n\n+", "\n\n", f.read()).strip()}
+         for archive_iterator in archive_iterators:
+             for xz_filepath, xz_f in archive_iterator:
+                 if not xz_filepath.endswith(".xz"):
+                     continue
+                 for txt_filepath, txt_f in iter_archive(xz_f):
+                     if not txt_filepath.endswith(".txt"):
+                         continue
+                     idx = f"{xz_filepath}/{txt_filepath}"
+                     yield idx, {"text": re.sub("\n\n\n+", "\n\n", txt_f.read().decode("utf-8")).strip()}
subsets/urlsf_subset00.tar ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:cba8c76f8a27ea1d0d1430cfed59a84dc812e29d0e999d10305bc49848cd7127
+ size 633047040
subsets/urlsf_subset01.tar ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:83891be18f14e9e2e7c7e7604c67f8ac44571f71bec34a6d57fe76e4bfe77c4c
+ size 628838400
subsets/urlsf_subset02.tar ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9f6bfb3408b3aec9d22ef18ac84ee0fe34c8555366f9074a733dfa68d9ebf2d1
+ size 629125120
subsets/urlsf_subset03.tar ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:69c91c02d754bfca9f8d9bddec43c5cac5cc728a62e09fd1a6ad66bef68ea900
+ size 627578880
subsets/urlsf_subset04.tar ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:6e9f206f3ab4d533a0f7a0d240b872d47a2b602d77ca09743fa02e5ee5ca31b1
+ size 627189760
subsets/urlsf_subset05.tar ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:49801cdaf398870eabd7225fce67d21afdd9923611c6fa16350ab39ed3ed5454
+ size 630200320
subsets/urlsf_subset06.tar ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:78129d2aebbe5991a2b3b7568019aa080d56bef2bcd0571f3dde460f74ea8d7a
+ size 625612800
subsets/urlsf_subset07.tar ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d78f0c0d2890a20fa1f9f00d92959e1c450fe6b8c879e0596cccad9bc7aba17f
+ size 625356800
subsets/urlsf_subset08.tar ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:17fecd74d654807b09d2b3328b8b71c050f2879bbe0f3514874be7f2fd3732ed
+ size 624629760
subsets/urlsf_subset09.tar ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:65368771304487ba42d432d281d31e061679ceef5d05a4d0a9b3a0fae4add14f
+ size 625807360
subsets/urlsf_subset10.tar ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d06538c4ca1a1558c2d9305b86d853908c4ba5937e93e1ac74e57c32dfca8154
+ size 625172480
subsets/urlsf_subset11.tar ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:815dd5bfd712ff74465ad96114bbcd93a0e71e9b170aa06689e22d6f3b83f90d
+ size 625264640
subsets/urlsf_subset12.tar ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:87619da8dfea1e843ff9644a08c52ce64bcb269383e1d22ac77483b65a5609cb
+ size 624445440
subsets/urlsf_subset13.tar ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:db495f0255105e7515abab02bfe52d32b966159a86d3ce543656a0b768a59239
+ size 628961280
subsets/urlsf_subset14.tar ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ae7dd7daa4fb6a4431115ded537569dbb521778d345a208f42184c9200f63352
+ size 626708480
subsets/urlsf_subset15.tar ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:58c0aa0d3bd4139f9224b8f1b49577fe534d403afb825db92042fe6f5c5211b9
+ size 620666880
subsets/urlsf_subset16.tar ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d0d3f20835741ce54d28577793c1f9adbd22d9c6593db55651d6f380c121e801
+ size 618752000
subsets/urlsf_subset17.tar ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e0c20956c68bcba2a6cff7724db1a7c305c64a54f55ca5612f026beee51baa66
+ size 619141120
subsets/urlsf_subset18.tar ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e8970951bd8815ac18a1a29c0618847a4384b0803fe1e85e9a22606e63cf135d
+ size 617789440
subsets/urlsf_subset19.tar ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:dfd91671633113ac5246334e217560ac57facb7e9353df5a56724c50a4e5c976
+ size 619192320
subsets/urlsf_subset20.tar ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1bc38f227abde149dbaee01e24eace67b534c1392b90655195f650b2b979ba52
+ size 376709120