Dataset: wikitext
Modalities: Text
Formats: parquet
Languages: English
Libraries: Datasets, Dask
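The tags above list the 🤗 Datasets library as a loader. A minimal loading sketch, assuming the `wikitext-2-v1` configuration named in the card below (the wikitext-103 and "-raw-" variants are assumed to load the same way):

```python
from datasets import load_dataset

# "wikitext-2-v1" is the configuration named in the card's split table;
# the wikitext-103 and "-raw-" variants are assumed to follow the same pattern.
ds = load_dataset("wikitext", "wikitext-2-v1")

print(ds)                      # DatasetDict with train/validation/test splits
print(ds["train"][0]["text"])  # each example has a single `text` string field
```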
Commit a7ad8d8 (1 parent: 3541efa), committed by system (HF staff)

Update files from the datasets library (from 1.7.0)

Release notes: https://github.com/huggingface/datasets/releases/tag/1.7.0

Files changed (1):
  1. README.md +17 -4
README.md CHANGED
@@ -1,4 +1,5 @@
 ---
+paperswithcode_id: wikitext-2
 ---
 
 # Dataset Card for "wikitext"
@@ -6,12 +7,12 @@
 ## Table of Contents
 - [Dataset Description](#dataset-description)
 - [Dataset Summary](#dataset-summary)
-- [Supported Tasks](#supported-tasks)
+- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
 - [Languages](#languages)
 - [Dataset Structure](#dataset-structure)
 - [Data Instances](#data-instances)
 - [Data Fields](#data-fields)
-- [Data Splits Sample Size](#data-splits-sample-size)
+- [Data Splits](#data-splits)
 - [Dataset Creation](#dataset-creation)
 - [Curation Rationale](#curation-rationale)
 - [Source Data](#source-data)
@@ -42,7 +43,7 @@
 The WikiText language modeling dataset is a collection of over 100 million tokens extracted from the set of verified
 Good and Featured articles on Wikipedia. The dataset is available under the Creative Commons Attribution-ShareAlike License.
 
-### Supported Tasks
+### Supported Tasks and Leaderboards
 
 [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
 
@@ -132,7 +133,7 @@ The data fields are the same among all splits.
 #### wikitext-2-v1
 - `text`: a `string` feature.
 
-### Data Splits Sample Size
+### Data Splits
 
 | name              | train |validation|test|
 |-------------------|------:|---------:|---:|
@@ -149,10 +150,22 @@ The data fields are the same among all splits.
 
 ### Source Data
 
+#### Initial Data Collection and Normalization
+
+[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
+#### Who are the source language producers?
+
 [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
 
 ### Annotations
 
+#### Annotation process
+
+[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
+#### Who are the annotators?
+
 [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
 
 ### Personal and Sensitive Information
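The counts in the renamed "Data Splits" table can be recomputed from the loaded dataset. A short sketch, again assuming the `wikitext-2-v1` configuration:

```python
from datasets import load_dataset

ds = load_dataset("wikitext", "wikitext-2-v1")

# Rebuild one row of the card's "Data Splits" table:
# | name | train | validation | test |
sizes = {s: ds[s].num_rows for s in ("train", "validation", "test")}
print(f"| wikitext-2-v1 | {sizes['train']} | {sizes['validation']} | {sizes['test']} |")
```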