---
dataset_info:
splits:
- name: train
num_examples: 1594197267
download_size: 3.3TB
license: odc-by
pretty_name: Zyda
task_categories:
- text-generation
language:
- en
size_categories:
- n>1T
configs:
- config_name: default
data_files:
- split: train
path: data/*/*/*
- config_name: zyda_no_starcoder
data_files:
- split: train
path: data/zyda_no_starcoder/*/*
- config_name: zyda_arxiv_only
data_files:
- split: train
path: data/zyda_no_starcoder/zyda_arxiv/*
- config_name: zyda_c4-en_only
data_files:
- split: train
path: data/zyda_no_starcoder/c4_en/*
- config_name: zyda_peS2o_only
data_files:
- split: train
path: data/zyda_no_starcoder/zyda_peS2o/*
- config_name: zyda_pile-uncopyrighted_only
data_files:
- split: train
path: data/zyda_no_starcoder/zyda_pile-uncopyrighted/*
- config_name: zyda_refinedweb_only
data_files:
- split: train
path: data/zyda_no_starcoder/zyda_refinedweb/*
- config_name: zyda_slimpajama_only
data_files:
- split: train
path: data/zyda_no_starcoder/zyda_slimpajama/*
- config_name: zyda_starcoder_only
data_files:
- split: train
path: data/zyda_starcoder/*/*
---

# Zyda
Zyda is a 1.3T-token language modelling dataset created by collecting open, high-quality datasets, combining them, and applying a uniform filtering and deduplication step. In our ablations, Zyda performs extremely well and, thanks to our meticulous post-processing pipeline, is at least comparable to, and potentially better than, the best openly available datasets. We think the best use of Zyda is either as a standalone dataset for language model training up to the 1T-token scale, or in combination with FineWeb or Dolma for multi-trillion-token training.
Zyda is the primary dataset used in phase 1 pretraining of Zamba, a model which performs strongly on a per-token basis, testifying to the strength of Zyda as a dataset.
Models trained on Zyda significantly outperform parameter-matched models from the Pythia suite trained on the Pile across 300B tokens.
Zyda also outperforms Dolma, RefinedWeb, and FineWeb when comparing 1.4B models trained on 50B tokens of each dataset.
According to our evaluations, the non-StarCoder variant of Zyda is the most performant-per-token open dataset available on language tasks, and the full dataset ties with FineWeb.
These results are aggregate scores on classic language modelling evaluations (PIQA, WinoGrande, OpenBookQA, ARC-Easy, ARC-Challenge) over the course of training for a 1.4B model trained on 50B tokens of each dataset.
## How to download
Full dataset:

```python
import datasets

ds = datasets.load_dataset("Zyphra/Zyda", split="train")
```

Full dataset without StarCoder:

```python
import datasets

ds = datasets.load_dataset("Zyphra/Zyda", name="zyda_no_starcoder", split="train")
```
To download an individual component, pass its name to the `name` argument of `load_dataset()` (see the example after the list):
- `zyda_arxiv_only`
- `zyda_c4-en_only`
- `zyda_peS2o_only`
- `zyda_pile-uncopyrighted_only`
- `zyda_refinedweb_only`
- `zyda_slimpajama_only`
- `zyda_starcoder_only`
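
For instance, a minimal sketch of loading only the arXiv component and peeking at a few documents; the use of streaming here is just a suggestion to avoid downloading the entire split up front:

```python
import datasets

# Stream the arXiv-only component rather than downloading the whole split first.
ds = datasets.load_dataset("Zyphra/Zyda", name="zyda_arxiv_only", split="train", streaming=True)

# Print the beginning of the first three documents.
for i, example in enumerate(ds):
    print(example["text"][:200])
    if i >= 2:
        break
```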
## Dataset Description
- Curated by: Zyphra
- Language(s) (NLP): Primarily English
- License: Open Data Commons Attribution License (ODC-By)
## Dataset Structure
Dataset fields:

- `text`: the actual text used for training
- `source`: the component dataset the text comes from
- `filtering_features`: precomputed values of the features used for filtering (converted to a JSON string)
- `source_other`: metadata from the source dataset (converted to a JSON string)
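
As an illustration, a short sketch of inspecting these fields for a single record; the exact keys inside `filtering_features` and `source_other` vary by component, so treat the printed key lists as dataset-dependent:

```python
import json
import datasets

# Stream a single record and decode its JSON-string fields.
ds = datasets.load_dataset("Zyphra/Zyda", name="zyda_no_starcoder", split="train", streaming=True)
example = next(iter(ds))

print(example["source"])                      # which component dataset the document came from
filtering_features = json.loads(example["filtering_features"])
source_other = json.loads(example["source_other"])
print(list(filtering_features.keys()))        # precomputed filter features (names vary by component)
print(list(source_other.keys()))              # metadata carried over from the source dataset
```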
### Source Data
- Pile Uncopyrighted: https://huggingface.co/datasets/monology/pile-uncopyrighted
- C4-en: https://huggingface.co/datasets/allenai/c4
- peS2o: https://huggingface.co/datasets/allenai/peS2o
- RefinedWeb: https://huggingface.co/datasets/tiiuae/falcon-refinedweb
- SlimPajama: https://huggingface.co/datasets/cerebras/SlimPajama-627B
- arxiv_s2orc_parsed: https://huggingface.co/datasets/ArtifactAI/arxiv_s2orc_parsed
- StarCoder: https://huggingface.co/datasets/bigcode/starcoderdata
### Data Collection and Processing
Zyda was created using a two stage post-processing pipeline consisting of filtering and deduplication.
For the filtering stage, we utilized a set of hand-crafted and tuned filters derived from a number of sources such as C4, RedPajama, and Gopher, in addition to our own filters.
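
For illustration only, a sketch of the kind of Gopher-style document-level heuristics such a filtering stage applies; the thresholds below are generic examples, not Zyda's tuned values (those are described in the technical report):

```python
def passes_basic_quality_filters(text: str) -> bool:
    """Illustrative Gopher-style heuristics; thresholds are examples, not Zyda's tuned values."""
    words = text.split()
    if not words or not (50 <= len(words) <= 100_000):   # reject very short or very long documents
        return False
    mean_word_len = sum(len(w) for w in words) / len(words)
    if not (3 <= mean_word_len <= 10):                    # implausible average word length
        return False
    alpha_words = sum(1 for w in words if any(c.isalpha() for c in w))
    if alpha_words / len(words) < 0.8:                    # too many symbol-only or numeric tokens
        return False
    return True
```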
For the deduplication stage, we used MinHash approximate deduplication. We deduplicated on 13-grams, used a MinHash signature size of 128, and filtered out documents above a Jaccard similarity of 0.4.
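
This is not our production pipeline, but a minimal sketch of MinHash/LSH near-deduplication with the same parameters (13-grams, signature size 128, Jaccard threshold 0.4), here using the `datasketch` library:

```python
from datasketch import MinHash, MinHashLSH

def shingles(text, n=13):
    """Yield word-level 13-grams, matching the n-gram size used for deduplication."""
    words = text.split()
    for i in range(max(len(words) - n + 1, 1)):
        yield " ".join(words[i:i + n])

def minhash(text, num_perm=128):
    m = MinHash(num_perm=num_perm)
    for gram in shingles(text):
        m.update(gram.encode("utf-8"))
    return m

# LSH index with a Jaccard similarity threshold of 0.4 over 128-permutation signatures.
lsh = MinHashLSH(threshold=0.4, num_perm=128)

docs = {"doc0": "some document text ...", "doc1": "another document ..."}
kept = []
for doc_id, text in docs.items():
    m = minhash(text)
    if lsh.query(m):          # an already-indexed document exceeds the similarity threshold
        continue              # treat as a near-duplicate and drop it
    lsh.insert(doc_id, m)
    kept.append(doc_id)
```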
For full details on our data processing see the technical report.
### Personal and Sensitive Information
As a language modelling dataset, Zyda likely contains PII that was not filtered out of the component datasets and that may have been missed by our own filters.
## Bias, Risks, and Limitations
Because Zyda is composed of open web scrapes, it likely contains biased and toxic content.
## Citation
If you use our dataset to train a model, please cite us at:
(-/TODO)