---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 9828026640.785877
num_examples: 6699666
- name: dev
num_bytes: 146694277.60706097
num_examples: 100000
- name: test
num_bytes: 146694277.60706097
num_examples: 100000
download_size: 6454536577
dataset_size: 10121415196
language:
- en
pretty_name: Wikipedia preprocessed for 512-token pretraining
size_categories:
- 1M<n<10M
---
# Dataset Card for "wikipedia_512_pretraining"
Wikipedia preprocessed for model pretraining. Each sample has an average tokenized length of 512 `RoBERTa-Base` tokens.
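
A minimal loading sketch using the 🤗 `datasets` library, assuming the dataset is published on the Hub under the name shown in the heading; the exact repository path (including any namespace prefix such as `<user>/wikipedia_512_pretraining`) is an assumption and may differ.

```python
from datasets import load_dataset

# Hypothetical repository path; prepend the owning namespace
# (e.g. "<user>/wikipedia_512_pretraining") if required.
dataset = load_dataset("wikipedia_512_pretraining")

# Splits per the dataset metadata: train, dev, test.
print(dataset)

# Each example is a single string under the `text` feature.
print(dataset["train"][0]["text"][:200])
```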