system HF staff committed on
Commit: 46a7d29
Parent: 69b3b22

Update files from the datasets library (from 1.4.0)


Release notes: https://github.com/huggingface/datasets/releases/tag/1.4.0

Files changed (2)
  1. README.md +21 -21
  2. wikipedia.py +5 -3
README.md CHANGED
@@ -39,7 +39,7 @@
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

- ## [Dataset Description](#dataset-description)
+ ## Dataset Description

  - **Homepage:** [https://dumps.wikimedia.org](https://dumps.wikimedia.org)
  - **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
@@ -49,7 +49,7 @@
  - **Size of the generated dataset:** 35376.35 MB
  - **Total amount of disk used:** 66115.60 MB

- ### [Dataset Summary](#dataset-summary)
+ ### Dataset Summary

  Wikipedia dataset containing cleaned articles of all languages.
  The datasets are built from the Wikipedia dump
@@ -57,19 +57,19 @@ The datasets are built from the Wikipedia dump
  contains the content of one full Wikipedia article with cleaning to strip
  markdown and unwanted sections (references, etc.).

- ### [Supported Tasks](#supported-tasks)
+ ### Supported Tasks

  [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

- ### [Languages](#languages)
+ ### Languages

  [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

- ## [Dataset Structure](#dataset-structure)
+ ## Dataset Structure

  We show detailed information for up to 5 configurations of the dataset.

- ### [Data Instances](#data-instances)
+ ### Data Instances

  #### 20200501.de

@@ -126,7 +126,7 @@ An example of 'train' looks as follows.

  ```

- ### [Data Fields](#data-fields)
+ ### Data Fields

  The data fields are the same among all splits.

@@ -150,7 +150,7 @@ The data fields are the same among all splits.
  - `title`: a `string` feature.
  - `text`: a `string` feature.

- ### [Data Splits Sample Size](#data-splits-sample-size)
+ ### Data Splits Sample Size

  | name | train |
  |------------|------:|
@@ -160,49 +160,49 @@ The data fields are the same among all splits.
  |20200501.frr| 11803|
  |20200501.it |1931197|

- ## [Dataset Creation](#dataset-creation)
+ ## Dataset Creation

- ### [Curation Rationale](#curation-rationale)
+ ### Curation Rationale

  [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

- ### [Source Data](#source-data)
+ ### Source Data

  [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

- ### [Annotations](#annotations)
+ ### Annotations

  [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

- ### [Personal and Sensitive Information](#personal-and-sensitive-information)
+ ### Personal and Sensitive Information

  [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

- ## [Considerations for Using the Data](#considerations-for-using-the-data)
+ ## Considerations for Using the Data

- ### [Social Impact of Dataset](#social-impact-of-dataset)
+ ### Social Impact of Dataset

  [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

- ### [Discussion of Biases](#discussion-of-biases)
+ ### Discussion of Biases

  [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

- ### [Other Known Limitations](#other-known-limitations)
+ ### Other Known Limitations

  [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

- ## [Additional Information](#additional-information)
+ ## Additional Information

- ### [Dataset Curators](#dataset-curators)
+ ### Dataset Curators

  [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

- ### [Licensing Information](#licensing-information)
+ ### Licensing Information

  [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

- ### [Citation Information](#citation-information)
+ ### Citation Information

  ```
  @ONLINE {wikidump,
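
For orientation, a minimal, hypothetical sketch (not part of this commit) of loading one of the configurations documented in the card above with the `datasets` library. It assumes `apache_beam` and `mwparserfromhell` are installed, since the loading script is beam-based (see the `datasets.BeamBasedBuilder` context in the wikipedia.py diff below); `20200501.frr` is used only because it is the smallest configuration in the split table.

```python
import datasets

# Beam-based dataset: a runner must be supplied so the dump can be processed locally.
wiki = datasets.load_dataset(
    "wikipedia",
    "20200501.frr",              # smallest config listed in the card above
    beam_runner="DirectRunner",  # local Apache Beam runner
)

# Each example exposes the two string fields documented under "Data Fields".
example = wiki["train"][0]
print(example["title"])
print(example["text"][:200])
```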
wikipedia.py CHANGED
@@ -20,7 +20,6 @@ from __future__ import absolute_import, division, print_function

 import codecs
 import json
- import logging
 import re
 import xml.etree.cElementTree as etree

@@ -29,6 +28,9 @@ import six
 import datasets


+ logger = datasets.logging.get_logger(__name__)
+
+
 if six.PY3:
     import bz2  # pylint:disable=g-import-not-at-top
 else:
@@ -461,7 +463,7 @@ class Wikipedia(datasets.BeamBasedBuilder):

         def _extract_content(filepath):
             """Extracts article content from a single WikiMedia XML file."""
-             logging.info("generating examples from = %s", filepath)
+             logger.info("generating examples from = %s", filepath)
             with beam.io.filesystems.FileSystems.open(filepath) as f:
                 f = bz2.BZ2File(filename=f)
                 if six.PY3:
@@ -506,7 +508,7 @@ class Wikipedia(datasets.BeamBasedBuilder):
                 text = _parse_and_clean_wikicode(raw_content, parser=mwparserfromhell)
             except (mwparserfromhell.parser.ParserError) as e:
                 beam.metrics.Metrics.counter(language, "parser-error").inc()
-                 logging.error("mwparserfromhell ParseError: %s", e)
+                 logger.error("mwparserfromhell ParseError: %s", e)
                 return

             if not text:
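
The change to wikipedia.py is that the script now logs through the library's own logging utilities instead of the stdlib `logging` module. A minimal sketch of that pattern (not from the commit; the dump path below is a made-up placeholder):

```python
import datasets

# Messages from loaders that use datasets.logging follow the library-wide
# verbosity setting rather than the root stdlib logger.
datasets.logging.set_verbosity_info()

logger = datasets.logging.get_logger(__name__)
logger.info("generating examples from = %s", "dumps/frrwiki-20200501-pages-articles.xml.bz2")
```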