---
license: odc-by
task_categories:
  - text-generation
language:
  - en
pretty_name: Zyda
size_categories:
  - n>1T
---

Zyda is a unified dataset, released under a permissive license, comprising most of the largest and highest-quality open-source datasets available. On top of these, we have performed extensive additional filtering, beyond what was originally applied, along with thorough intra- and inter-dataset deduplication. Our aim is to create a growing, extendable meta-dataset that practitioners can easily use to train language models at the trillion-token scale, while consolidating and unifying the efforts of the disparate open-source groups who released the underlying datasets. Ultimately, we hope this work provides an accessible, 'off-the-shelf', trillion-scale, high-quality pretraining dataset for groups interested in pretraining their own LLMs.
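As a minimal sketch of how one might load the dataset with the Hugging Face `datasets` library: the repository id `Zyphra/Zyda` is an assumption based on this card's location, and streaming is used here purely because downloading a trillion-token dataset up front is usually impractical.

```python
# Minimal sketch: streaming Zyda with the Hugging Face `datasets` library.
# The repository id "Zyphra/Zyda" is an assumption based on this card's location.
from datasets import load_dataset

# Streaming avoids materializing the full trillion-token dataset on disk.
ds = load_dataset("Zyphra/Zyda", split="train", streaming=True)

# Inspect a single document.
for example in ds:
    print(example)
    break
```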

While we do not perform new filtering of Common Crawl, we believe this work is an important and valuable step towards the creation of large-scale, high-quality open datasets: a number of high-quality datasets already exist, but few, if any, individually reach the scale necessary for training frontier models. Collating, filtering, and deduplicating existing datasets into a trillion-token dataset is nontrivial work, and it is essential for raising overall quality and preventing significant inter-dataset duplication. Cross-dataset deduplication matters in particular given the substantial number of duplicated documents we discovered across the datasets collected in this work.

Here we release a version of the dataset that was deduplicated using MinHash LSH with a 40% Jaccard similarity threshold.
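To illustrate what MinHash-LSH deduplication at a 40% Jaccard threshold looks like in practice, below is a minimal sketch using the `datasketch` library. The shingling scheme, `num_perm` setting, and helper names are illustrative assumptions, not the exact pipeline used to produce Zyda; only the 40% threshold comes from this card.

```python
# Minimal MinHash-LSH deduplication sketch using the `datasketch` library.
# The shingle size and num_perm below are illustrative assumptions; only the
# 40% Jaccard similarity threshold comes from the dataset card above.
from datasketch import MinHash, MinHashLSH

NUM_PERM = 128  # hash permutations per signature (assumption)


def minhash_of(text: str, shingle_size: int = 5) -> MinHash:
    """Build a MinHash signature over word shingles of a document."""
    words = text.lower().split()
    m = MinHash(num_perm=NUM_PERM)
    for i in range(max(1, len(words) - shingle_size + 1)):
        shingle = " ".join(words[i : i + shingle_size])
        m.update(shingle.encode("utf-8"))
    return m


def deduplicate(docs: dict[str, str]) -> list[str]:
    """Return ids of documents kept after near-duplicate removal."""
    # threshold=0.4 ~ keep only documents whose estimated Jaccard
    # similarity to every already-kept document is below 40%.
    lsh = MinHashLSH(threshold=0.4, num_perm=NUM_PERM)
    kept = []
    for doc_id, text in docs.items():
        sig = minhash_of(text)
        if lsh.query(sig):  # a near-duplicate is already indexed
            continue
        lsh.insert(doc_id, sig)
        kept.append(doc_id)
    return kept


if __name__ == "__main__":
    docs = {
        "a": "the quick brown fox jumps over the lazy dog near the river bank",
        "b": "the quick brown fox jumps over the lazy dog near the river edge",
        "c": "an entirely different document about language model pretraining data",
    }
    print(deduplicate(docs))  # "b" is likely dropped as a near-duplicate of "a"
```

The key property of the LSH index is that each new document is compared only against candidate buckets rather than every previously kept document, which is what makes deduplication tractable at trillion-token scale.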