metadata
language:
- en
paperswithcode_id: wiki-40b
pretty_name: Wiki-40B
Dataset Card for "wiki40b"
Table of Contents
- Dataset Description
- Dataset Structure
- Dataset Creation
- Considerations for Using the Data
- Additional Information
Dataset Description
- Homepage: https://research.google/pubs/pub49029/
- Repository: More Information Needed
- Paper: More Information Needed
- Point of Contact: More Information Needed
- Size of downloaded dataset files: 0.00 MB
- Size of the generated dataset: 9988.05 MB
- Total amount of disk used: 9988.05 MB
Dataset Summary
Cleaned-up text from 40+ Wikipedia language editions, restricted to pages that correspond to entities. The dataset has train/dev/test splits per language. It is cleaned by page filtering to remove disambiguation pages, redirect pages, deleted pages, and non-entity pages. Each example contains the Wikidata ID of the entity and the full Wikipedia article after page processing that removes non-content sections and structured objects.
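As a quick illustration (not part of the original card), the English configuration can typically be loaded with the Hugging Face `datasets` library; the configuration name `"en"` and the availability of the preprocessed data for download are assumptions here:

```python
from datasets import load_dataset

# Load the English configuration of Wiki-40B (other language codes select
# other Wikipedia editions). This assumes the preprocessed data is reachable.
dataset = load_dataset("wiki40b", "en")

# The dataset exposes train/validation/test splits of cleaned articles,
# each keyed by the Wikidata ID of the corresponding entity.
print(dataset)
print(dataset["train"][0].keys())  # expected: wikidata_id, text, version_id
```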
Supported Tasks and Leaderboards
Languages
Dataset Structure
Data Instances
en
- Size of downloaded dataset files: 0.00 MB
- Size of the generated dataset: 9988.05 MB
- Total amount of disk used: 9988.05 MB
An example of 'train' looks as follows.
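The example block itself is missing from this card; a hypothetical, abbreviated instance (values invented for illustration, article text truncated) might look like the following, where the `_START_ARTICLE_` / `_START_SECTION_` / `_START_PARAGRAPH_` markers delimit the structure of the processed text:

```python
# Hypothetical instance with made-up values, shown only to illustrate the schema.
example = {
    "wikidata_id": "Q123456",
    "text": "_START_ARTICLE_\nExample Article\n_START_PARAGRAPH_\nFirst paragraph ..._NEWLINE_Second line ...",
    "version_id": "1234567890",
}
```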
Data Fields
The data fields are the same among all splits.
en
- wikidata_id: a string feature.
- text: a string feature.
- version_id: a string feature.
Data Splits
| name | train   | validation | test   |
|------|---------|------------|--------|
| en   | 2926536 | 163597     | 162274 |
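For reference, the split sizes in the table above can be checked against a loaded copy of the dataset (assuming it was loaded as in the earlier sketch):

```python
# Assumes `dataset` was loaded with load_dataset("wiki40b", "en") as above.
sizes = {split: dataset[split].num_rows for split in dataset}
print(sizes)  # expected: {'train': 2926536, 'validation': 163597, 'test': 162274}
```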
Dataset Creation
Curation Rationale
Source Data
Initial Data Collection and Normalization
Who are the source language producers?
Annotations
Annotation process
Who are the annotators?
Personal and Sensitive Information
Considerations for Using the Data
Social Impact of Dataset
Discussion of Biases
Other Known Limitations
Additional Information
Dataset Curators
Licensing Information
Citation Information
Contributions
Thanks to @jplu, @patrickvonplaten, @thomwolf, @albertvillanova, @lhoestq for adding this dataset.