---
configs:
- config_name: all
  data_files:
  - split: train
    path:
    - "data/ArXiv/train/*.arrow"
    - "data/BookCorpus2/train/*.arrow"
    - "data/Books3/train/*.arrow"
    - "data/DM Mathematics/train/*.arrow"
    - "data/Enron Emails/train/*.arrow"
    - "data/EuroParl/train/*.arrow"
    - "data/FreeLaw/train/*.arrow"
    - "data/Github/train/*.arrow"
    - "data/Gutenberg (PG-19)/train/*.arrow"
    - "data/HackerNews/train/*.arrow"
    - "data/NIH ExPorter/train/*.arrow"
    - "data/OpenSubtitles/train/*.arrow"
    - "data/OpenWebText2/train/*.arrow"
    - "data/PhilPapers/train/*.arrow"
    - "data/Pile-CC/train/*.arrow"
    - "data/PubMed Abstracts/train/*.arrow"
    - "data/PubMed Central/train/*.arrow"
    - "data/StackExchange/train/*.arrow"
    - "data/UPSTO Backgrounds/train/*.arrow"
    - "data/Ubuntu IRC/train/*.arrow"
    - "data/Wikipedia (en)/train/*.arrow"
    - "data/YoutubeSubtitles/train/*.arrow"
  - split: test
    path:
    - "data/ArXiv/test/*.arrow"
    - "data/BookCorpus2/test/*.arrow"
    - "data/Books3/test/*.arrow"
    - "data/DM Mathematics/test/*.arrow"
    - "data/Enron Emails/test/*.arrow"
    - "data/EuroParl/test/*.arrow"
    - "data/FreeLaw/test/*.arrow"
    - "data/Github/test/*.arrow"
    - "data/Gutenberg (PG-19)/test/*.arrow"
    - "data/HackerNews/test/*.arrow"
    - "data/NIH ExPorter/test/*.arrow"
    - "data/OpenSubtitles/test/*.arrow"
    - "data/OpenWebText2/test/*.arrow"
    - "data/PhilPapers/test/*.arrow"
    - "data/Pile-CC/test/*.arrow"
    - "data/PubMed Abstracts/test/*.arrow"
    - "data/PubMed Central/test/*.arrow"
    - "data/StackExchange/test/*.arrow"
    - "data/UPSTO Backgrounds/test/*.arrow"
    - "data/Ubuntu IRC/test/*.arrow"
    - "data/Wikipedia (en)/test/*.arrow"
    - "data/YoutubeSubtitles/test/*.arrow"
  default: true
- config_name: ArXiv
  data_files:
  - split: train
    path: "data/ArXiv/train/*.arrow"
  - split: test
    path: "data/ArXiv/test/*.arrow"
- config_name: BookCorpus2
  data_files:
  - split: train
    path: "data/BookCorpus2/train/*.arrow"
  - split: test
    path: "data/BookCorpus2/test/*.arrow"
- config_name: Books3
  data_files:
  - split: train
    path: "data/Books3/train/*.arrow"
  - split: test
    path: "data/Books3/test/*.arrow"
- config_name: DM Mathematics
  data_files:
  - split: train
    path: "data/DM Mathematics/train/*.arrow"
  - split: test
    path: "data/DM Mathematics/test/*.arrow"
- config_name: Enron Emails
  data_files:
  - split: train
    path: "data/Enron Emails/train/*.arrow"
  - split: test
    path: "data/Enron Emails/test/*.arrow"
- config_name: EuroParl
  data_files:
  - split: train
    path: "data/EuroParl/train/*.arrow"
  - split: test
    path: "data/EuroParl/test/*.arrow"
- config_name: FreeLaw
  data_files:
  - split: train
    path: "data/FreeLaw/train/*.arrow"
  - split: test
    path: "data/FreeLaw/test/*.arrow"
- config_name: Github
  data_files:
  - split: train
    path: "data/Github/train/*.arrow"
  - split: test
    path: "data/Github/test/*.arrow"
- config_name: Gutenberg (PG-19)
  data_files:
  - split: train
    path: "data/Gutenberg (PG-19)/train/*.arrow"
  - split: test
    path: "data/Gutenberg (PG-19)/test/*.arrow"
- config_name: HackerNews
  data_files:
  - split: train
    path: "data/HackerNews/train/*.arrow"
  - split: test
    path: "data/HackerNews/test/*.arrow"
- config_name: NIH ExPorter
  data_files:
  - split: train
    path: "data/NIH ExPorter/train/*.arrow"
  - split: test
    path: "data/NIH ExPorter/test/*.arrow"
- config_name: OpenSubtitles
  data_files:
  - split: train
    path: "data/OpenSubtitles/train/*.arrow"
  - split: test
    path: "data/OpenSubtitles/test/*.arrow"
- config_name: OpenWebText2
  data_files:
  - split: train
    path: "data/OpenWebText2/train/*.arrow"
  - split: test
    path: "data/OpenWebText2/test/*.arrow"
- config_name: PhilPapers
  data_files:
  - split: train
    path: "data/PhilPapers/train/*.arrow"
  - split: test
    path: "data/PhilPapers/test/*.arrow"
- config_name: Pile-CC
  data_files:
  - split: train
    path: "data/Pile-CC/train/*.arrow"
  - split: test
    path: "data/Pile-CC/test/*.arrow"
- config_name: PubMed Abstracts
  data_files:
  - split: train
    path: "data/PubMed Abstracts/train/*.arrow"
  - split: test
    path: "data/PubMed Abstracts/test/*.arrow"
- config_name: PubMed Central
  data_files:
  - split: train
    path: "data/PubMed Central/train/*.arrow"
  - split: test
    path: "data/PubMed Central/test/*.arrow"
- config_name: StackExchange
  data_files:
  - split: train
    path: "data/StackExchange/train/*.arrow"
  - split: test
    path: "data/StackExchange/test/*.arrow"
- config_name: UPSTO Backgrounds
  data_files:
  - split: train
    path: "data/UPSTO Backgrounds/train/*.arrow"
  - split: test
    path: "data/UPSTO Backgrounds/test/*.arrow"
- config_name: Ubuntu IRC
  data_files:
  - split: train
    path: "data/Ubuntu IRC/train/*.arrow"
  - split: test
    path: "data/Ubuntu IRC/test/*.arrow"
- config_name: Wikipedia (en)
  data_files:
  - split: train
    path: "data/Wikipedia (en)/train/*.arrow"
  - split: test
    path: "data/Wikipedia (en)/test/*.arrow"
- config_name: YoutubeSubtitles
  data_files:
  - split: train
    path: "data/YoutubeSubtitles/train/*.arrow"
  - split: test
    path: "data/YoutubeSubtitles/test/*.arrow"
---

# Dataset description

[The Pile](https://arxiv.org/abs/2101.00027) is an 800GB dataset of English text
designed by EleutherAI to train large-scale language models. The original version of
the dataset can be found [here](https://huggingface.co/datasets/EleutherAI/pile).

The dataset is divided into 22 smaller, high-quality datasets. For more information about
each of them, please refer to [the datasheet for the Pile](https://arxiv.org/abs/2201.07311).

However, the version of the dataset available on the Hub is not split accordingly.
We addressed this in order to improve the user experience when working with
the Pile via the Hub.

Here is an instance of the Pile:

```
{
  'meta': {'pile_set_name': 'Pile-CC'},
  'text': 'It is done, and submitted. You can play “Survival of the Tastiest” on Android, and on the web. Playing on...'
}
```

We used the `meta` column to divide the dataset into subsets: each instance `example` belongs to the subset
`domain`, where `domain = example['meta']['pile_set_name']`. By doing this, we were able to create a [new version of the Pile](https://huggingface.co/datasets/ArmelR/sharded-pile)
that is properly divided, with each instance carrying a new `domain` column, as sketched below.
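
A minimal sketch of how such a `domain` column can be derived with the `datasets` library (the exact preprocessing script is not part of this card, so the streaming flag and function name below are illustrative):

```python
from datasets import load_dataset

# Illustrative sketch: derive a `domain` column from the Pile's `meta` field.
pile = load_dataset("EleutherAI/pile", split="train", streaming=True)

def add_domain(example):
    # e.g. {'pile_set_name': 'Pile-CC'} -> domain = 'Pile-CC'
    example["domain"] = example["meta"]["pile_set_name"]
    return example

pile_with_domain = pile.map(add_domain)
```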

We further split each subset into train/test (97%/3%) to build the current dataset, which has the following structure:

```
data
    ArXiv
        train
        test
    BookCorpus2
        train
        test
    Books3
        train
        test
```
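
As a hedged illustration of that 97%/3% split (the exact procedure, random seed, and intermediate split names are not documented here and are assumptions):

```python
from datasets import load_dataset

# Illustrative: take one subset of the intermediate dataset and split it 97% / 3%.
subset = load_dataset("ArmelR/sharded-pile", split="train").filter(
    lambda example: example["domain"] == "ArXiv"
)

# train_test_split keeps 97% of the rows for train and 3% for test.
splits = subset.train_test_split(test_size=0.03, seed=42)
train_split, test_split = splits["train"], splits["test"]
```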

# Usage

```python
from datasets import load_dataset

dataset = load_dataset(
    "ArmelR/the-pile-splitted",
    subset_of_interest,
    num_proc=8
)
```

Using `subset_of_interest = "default"` will load the whole dataset.
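
For example, a short snippet loading a single subset and peeking at one record (the subset name here is only an illustration; any `config_name` from the YAML header works):

```python
from datasets import load_dataset

# Load only the ArXiv subset; each config exposes 'train' and 'test' splits.
arxiv = load_dataset("ArmelR/the-pile-splitted", "ArXiv", num_proc=8)

print(arxiv)                            # DatasetDict({'train': ..., 'test': ...})
print(arxiv["train"][0]["text"][:200])  # first characters of the first document
```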
|