---
dataset_info:
splits:
- name: train
num_examples: 1594197267
download_size: 3.3TB
license: odc-by
pretty_name: Zyda
task_categories:
- text-generation
language:
- en
size_categories:
- n>1T
configs:
- config_name: default
data_files:
- split: train
path: data/*/*/*
- config_name: zyda_no_starcoder
data_files:
- split: train
path: data/zyda_no_starcoder/*/*
- config_name: zyda_arxiv_only
data_files:
- split: train
path: data/zyda_no_starcoder/zyda_arxiv/*
- config_name: zyda_c4-en_only
data_files:
- split: train
path: data/zyda_no_starcoder/c4_en/*
- config_name: zyda_peS2o_only
data_files:
- split: train
path: data/zyda_no_starcoder/zyda_peS2o/*
- config_name: zyda_pile-uncopyrighted_only
data_files:
- split: train
path: data/zyda_no_starcoder/zyda_pile-uncopyrighted/*
- config_name: zyda_refinedweb_only
data_files:
- split: train
path: data/zyda_no_starcoder/zyda_refinedweb/*
- config_name: zyda_slimpajama_only
data_files:
- split: train
path: data/zyda_no_starcoder/zyda_slimpajama/*
- config_name: zyda_starcoder_only
data_files:
- split: train
path: data/zyda_starcoder/*/*
---
# Zyda
<!-- Provide a quick summary of the dataset. -->
Zyda is a 1.3T-token language modeling dataset created by collecting open, high-quality datasets, combining them, and applying a uniform filtering and deduplication step. We find that Zyda performs extremely well in ablations and is at least comparable to, and potentially better than, the best openly available datasets, thanks to our meticulous post-processing pipeline. We think the best use of Zyda is either as a standalone dataset for language model training up to the 1T-token scale, or in combination with Fineweb or Dolma for multi-trillion-token training.
An early version of Zyda was used as the primary dataset for phase 1 pretraining of [Zamba](https://arxiv.org/abs/2405.16712), a model which performs strongly on a per-token basis, testifying to the strength of Zyda as a pretraining dataset.
Models trained on Zyda significantly outperform identical models of the Pythia suite trained on the [Pile](https://arxiv.org/abs/2101.00027) for 300B tokens.
Zyda also outperforms Dolma, RefinedWeb, and Fineweb on 1.4B models trained on 50B tokens of each dataset.
According to our evaluations, the non-StarCoder variant of Zyda is the most performant open dataset per token on language tasks. The Zyda StarCoder variant ties with Fineweb.
<center>
<img src="https://cdn-uploads.huggingface.co/production/uploads/65c05e75c084467acab2f84a/VdrCqypZtTpjEs7bH1k9s.png" width="650" alt="Zyda performance across steps.">
</center>
These results are aggregate scores of classic language modeling evaluations (PIQA, WinoGrande, OpenBookQA, ARC-Easy, ARC-Challenge) across training steps for a 1.4B model trained on 50B tokens of each dataset.
## How to download
Full dataset:
```python
import datasets
ds = datasets.load_dataset("Zyphra/Zyda", split="train")
```
Full dataset without StarCoder:
```python
import datasets
ds = datasets.load_dataset("Zyphra/Zyda", name="zyda_no_starcoder", split="train")
```
To download an individual component, pass its config name to the `name` argument of `load_dataset()` (see the example after this list):
- zyda_arxiv_only
- zyda_c4-en_only
- zyda_peS2o_only
- zyda_pile-uncopyrighted_only
- zyda_refinedweb_only
- zyda_slimpajama_only
- zyda_starcoder_only
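For example, here is a minimal sketch of streaming a single component so you can inspect it without downloading the full 3.3TB locally (the config name `zyda_arxiv_only` is just one of the options above):
```python
import datasets

# Stream the arXiv component; streaming avoids a full local download.
ds = datasets.load_dataset(
    "Zyphra/Zyda",
    name="zyda_arxiv_only",
    split="train",
    streaming=True,
)
print(next(iter(ds))["text"][:200])
```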
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** Zyphra
- **Language(s) (NLP):** Primarily English
- **License:** Open Data Commons Attribution License (ODC-BY)
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
Dataset fields:
- `text`: the actual document text used for training
- `source`: the component dataset the document comes from
- `filtering_features`: precomputed values of the features used for filtering (serialized as a JSON string)
- `source_other`: metadata carried over from the source dataset (serialized as a JSON string)
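For reference, a minimal sketch of reading these fields from a streamed example (the exact keys inside the two JSON strings vary by component and are not listed here):
```python
import json
import datasets

ds = datasets.load_dataset("Zyphra/Zyda", name="zyda_arxiv_only", split="train", streaming=True)
example = next(iter(ds))

print(example["source"])                                         # component the document came from
filtering_features = json.loads(example["filtering_features"])   # dict of precomputed filter features
source_other = json.loads(example["source_other"])               # dict of source-specific metadata
```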
### Source Data
Zyda was drawn from seven component open datasets which are well-regarded in the community. These are:
- [Pile Uncopyrighted](https://huggingface.co/datasets/monology/pile-uncopyrighted)
- [C4-en](https://huggingface.co/datasets/allenai/c4)
- [peS2o](https://huggingface.co/datasets/allenai/peS2o)
- [RefinedWeb](https://huggingface.co/datasets/tiiuae/falcon-refinedweb)
- [SlimPajama](https://huggingface.co/datasets/cerebras/SlimPajama-627B)
- [arxiv_s2orc_parsed](https://huggingface.co/datasets/ArtifactAI/arxiv_s2orc_parsed)
- [StarCoder](https://huggingface.co/datasets/bigcode/starcoderdata)
<center>
<img src="https://cdn-uploads.huggingface.co/production/uploads/65c05e75c084467acab2f84a/eCJWG3ZoA4fVk8bZZBHaG.png" width="650" alt="Composition of Zyda">
</center>
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
Zyda was created using a two-stage post-processing pipeline consisting of *filtering* and *deduplication*.
For the filtering stage, we applied a set of hand-crafted and tuned filters derived from sources such as C4, RedPajama, and Gopher, in addition to our own filters.
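As a rough illustration only (this is not the actual Zyda filter set, and the thresholds below are invented for readability; see the processing code linked below for the real filters), filters of this kind reduce to simple per-document heuristics:
```python
def passes_basic_filters(text: str) -> bool:
    """Toy Gopher/C4-style heuristics; thresholds are hypothetical."""
    words = text.split()
    if not 50 <= len(words) <= 100_000:        # word-count bounds
        return False
    mean_word_len = sum(len(w) for w in words) / len(words)
    if not 3 <= mean_word_len <= 10:           # reject gibberish or mis-parsed text
        return False
    alpha_frac = sum(any(c.isalpha() for c in w) for w in words) / len(words)
    if alpha_frac < 0.8:                       # mostly non-alphabetic documents
        return False
    if text.count("{") > 100:                  # likely code or markup leaking into prose
        return False
    return True
```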
For the deduplication stage, we used minhash approximate deduplication: we deduplicated on 13-grams, used a minhash signature size of 128, and filtered out documents above a Jaccard similarity of 0.4.
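The sketch below shows what minhash deduplication with these parameters looks like using the `datasketch` library. This is a toy illustration, not the production pipeline (which is in the processing code linked below):
```python
from datasketch import MinHash, MinHashLSH

NUM_PERM = 128    # minhash signature size
NGRAM = 13        # deduplicate on 13-grams
THRESHOLD = 0.4   # documents above this estimated Jaccard similarity are dropped

def signature(text: str) -> MinHash:
    tokens = text.split()
    grams = {" ".join(tokens[i:i + NGRAM]) for i in range(max(len(tokens) - NGRAM + 1, 1))}
    m = MinHash(num_perm=NUM_PERM)
    for gram in grams:
        m.update(gram.encode("utf-8"))
    return m

def deduplicate(docs: list[str]) -> list[str]:
    """Keep the first document seen in each near-duplicate cluster."""
    lsh = MinHashLSH(threshold=THRESHOLD, num_perm=NUM_PERM)
    kept = []
    for i, text in enumerate(docs):
        m = signature(text)
        if not lsh.query(m):      # no previously kept doc above the threshold
            lsh.insert(str(i), m)
            kept.append(text)
    return kept
```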
For full details on our data processing, see the [Zyda technical report] (TODO LINK) and our [dataset processing code](https://github.com/Zyphra/Zyda_processing).
#### Personal and Sensitive Information
As a language modeling dataset, Zyda likely contains PII that was not filtered out of the component datasets and that may have been missed by our own filters.
## Bias, Risks, and Limitations
As a dataset composed largely of open web scrapes, Zyda likely contains biased and toxic content.
## Licensing Information
We are releasing this dataset under the terms of [ODC-BY](https://opendatacommons.org/licenses/by/1-0/). By using this dataset, you are also bound by any license agreements and terms of use of the original data sources.
## Citation
If you use our dataset to train a model, please cite us at:
(-/TODO)