---
license: odc-by
viewer: true
task_categories:
  - text-generation
language:
  - en
tags:
  - language-modeling
  - causal-lm
  - llm
pretty_name: Dolma
size_categories:
  - n>1T
---

# Dolma

*Dolma's official logo: "dolma" written in yellow, round lowercase letters over a blue background.*

Dolma is a dataset of 3 trillion tokens from a diverse mix of web content, academic publications, code, books, and encyclopedic materials.

More information:

  - Read the Dolma manuscript and its data sheet on arXiv.
  - Explore the open-source tools we created to curate Dolma.
  - Want to request removal of personal data? Use this form to notify us of documents containing PII about a specific user.

To learn more about the toolkit used to create Dolma, including how to replicate this dataset, head over to our GitHub project page!

2024-04-17: Dolma v1.7 Release. We have released an updated version of Dolma that we used to train our latest OLMo 7B-v1.7 model.

2024-04-15: License Change. We have updated the license of Dolma to ODC-BY. Please see this blog post for more information.

## Versions

At the moment, there are six versions of Dolma available:

| Version | Default? | Release Date | Size (gzip) | Description |
|---|---|---|---|---|
| `v1_7` | ✓ | 2024-04-15 | X.X TB | Used to train OLMo-7B-v1.7. |
| `v1_6` | | 2024-01-31 | 5.4 TB | An update to v1.5 with some bug fixes. |
| `v1_6-sample` | | 2024-01-31 | 16.4 GB | A smaller sample of Dolma, with roughly 10 billion tokens. Useful for data exploration. |
| `v1_5` | | 2023-10-31 | 6.4 TB | The version of Dolma used to train OLMo-1B. Roughly 3 trillion tokens. |
| `v1_5-sample` | | 2023-10-31 | 2.9 TB | A sample of roughly 1.9 trillion tokens used to train OLMo-7B. |
| `v1` | | 2023-08-18 | 6.0 TB | The first version of Dolma. |

## Summary Statistics (v1.7)

| Source | Provenance | New? | Documents (millions) | OLMo tokens (billions) | Sample Proportion | Cutoff Date | Processing |
|---|---|---|---|---|---|---|---|
| Dolma's CC | Common Crawl via Dolma v1.6 | Updated | | 1,195.5 | 50% | Mar 2023 | Extracted using the Dolma pipeline; new quality filtering and deduplication steps |
| Refined Web | Refined Web | Yes | | 456.4 | 100% | Feb 2023 | |
| StarCoder | StarCoder | Yes | | 263.8 | 100% | May 2023 | No further processing |
| C4 | C4 via Dolma v1.6 | Updated | | 138.4 | 50% | Apr 2019 | Filtered using the Dolma pipeline; new quality filtering and deduplication steps |
| Reddit | PushShift API | Updated | | 79.9 | 100% | Mar 2023 | Extracted using the Dolma pipeline; new quality filtering and deduplication steps |
| Semantic Scholar | S2AG/S2ORC/peS2o via Dolma v1.6 | No | 38.8 | 57.2 | 100% | Mar 2023 | Same as Dolma v1.6 |
| Project Gutenberg | Project Gutenberg | No | 0.056 | 6.0 | 100% | Mar 2023 | Same as Dolma v1.6 |

## Summary Statistics (v1.6)

| Source | Doc Type | UTF-8 bytes (GB) | Documents (millions) | Unicode words (billions) | Llama tokens (billions) |
|---|---|---|---|---|---|
| Common Crawl | web pages | 9,022 | 3,370 | 1,775 | 2,281 |
| The Stack | code | 1,043 | 210 | 260 | 411 |
| C4 | web pages | 790 | 364 | 153 | 198 |
| Reddit | social media | 339 | 377 | 72 | 89 |
| PeS2o | STEM papers | 268 | 38.8 | 50 | 70 |
| Project Gutenberg | books | 20.4 | 0.056 | 4.0 | 6.0 |
| Wikipedia, Wikibooks | encyclopedic | 16.2 | 6.2 | 3.7 | 4.3 |
| **Total** | | 11,519 | 4,367 | 2,318 | 3,059 |

(The size difference between v1_6 and v1_5 is due to a different set of metadata included in the files: we removed redundant metadata in v1_6.)
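As a quick sanity check on the v1.6 totals above, the corpus-wide ratio of Llama tokens to Unicode words works out to about 1.32. The snippet below simply restates that arithmetic, with the figures copied from the table:

```python
# Corpus totals from the v1.6 summary table (both in billions)
unicode_words = 2318
llama_tokens = 3059

# Average number of Llama tokens per Unicode word across Dolma v1.6
tokens_per_word = llama_tokens / unicode_words
print(f"{tokens_per_word:.2f} Llama tokens per word")  # roughly 1.32
```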

## Download

The fastest way to download Dolma is to clone this repository and use the files in the `urls` directory. We recommend using wget together with xargs to download files in parallel. For example:

```shell
DATA_DIR="<path_to_your_data_directory>"
PARALLEL_DOWNLOADS="<number_of_parallel_downloads>"
DOLMA_VERSION="<version_of_dolma_to_download>"

# Clone the repository to get the per-version URL lists
git clone https://huggingface.co/datasets/allenai/dolma
mkdir -p "${DATA_DIR}"

# Download every file in the URL list, PARALLEL_DOWNLOADS files at a time
xargs -n 1 -P "${PARALLEL_DOWNLOADS}" wget -q -P "${DATA_DIR}" < "dolma/urls/${DOLMA_VERSION}.txt"
```
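If you prefer to stay in Python, the same parallel fetch can be sketched with the standard library alone. This is a minimal sketch, not part of the Dolma toolkit: `read_url_list` and `download_all` are hypothetical helper names we introduce here.

```python
import concurrent.futures
import pathlib
import urllib.request


def read_url_list(path):
    """Return the non-empty, non-comment lines of a URL list file."""
    lines = pathlib.Path(path).read_text().splitlines()
    return [ln.strip() for ln in lines if ln.strip() and not ln.lstrip().startswith("#")]


def download_all(urls, dest_dir, workers=8):
    """Fetch each URL into dest_dir, `workers` files at a time."""
    dest = pathlib.Path(dest_dir)
    dest.mkdir(parents=True, exist_ok=True)

    def fetch(url):
        # Name each local file after the last path segment of its URL
        target = dest / url.rsplit("/", 1)[-1]
        urllib.request.urlretrieve(url, target)
        return target

    with concurrent.futures.ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(fetch, urls))
```

Usage would mirror the shell recipe above, e.g. `download_all(read_url_list("dolma/urls/v1_7.txt"), DATA_DIR, workers=16)`.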

Then, to load this data using Hugging Face's datasets library, you can use the following code:

```python
import os
from datasets import load_dataset

os.environ["DATA_DIR"] = "<path_to_your_data_directory>"
dataset = load_dataset("allenai/dolma", split="train")
```
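Each downloaded shard is a gzip-compressed JSONL file, so you can also stream documents without the datasets library. A minimal sketch, assuming one JSON object per line with a `text` field (`iter_documents` is our own helper, not a Dolma API):

```python
import gzip
import json


def iter_documents(path):
    """Yield one JSON document per line from a gzip-compressed JSONL shard."""
    with gzip.open(path, "rt", encoding="utf-8") as f:
        for line in f:
            if line.strip():
                yield json.loads(line)
```

This is handy for inspecting a single shard (e.g. `next(iter_documents("cc_en_head-0000.json.gz"))`) without loading the whole split.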

## Licensing Information

We are releasing this dataset under the terms of ODC-BY. By using this dataset, you are also bound by any license agreements and terms of use of the original data sources.

## BibTeX

If you use our dataset or tooling, please cite it as:

```bibtex
@article{dolma,
  title = {{Dolma: an Open Corpus of Three Trillion Tokens for Language Model Pretraining Research}},
  author = {
    Luca Soldaini and Rodney Kinney and Akshita Bhagia and Dustin Schwenk and David Atkinson and
    Russell Authur and Ben Bogin and Khyathi Chandu and Jennifer Dumas and Yanai Elazar and
    Valentin Hofmann and Ananya Harsh Jha and Sachin Kumar and Li Lucy and Xinxi Lyu and
    Nathan Lambert and Ian Magnusson and Jacob Morrison and Niklas Muennighoff and Aakanksha Naik and
    Crystal Nam and Matthew E. Peters and Abhilasha Ravichander and Kyle Richardson and Zejiang Shen and
    Emma Strubell and Nishant Subramani and Oyvind Tafjord and Pete Walsh and Luke Zettlemoyer and
    Noah A. Smith and Hannaneh Hajishirzi and Iz Beltagy and Dirk Groeneveld and Jesse Dodge and Kyle Lo
  },
  year = {2024},
  journal = {arXiv preprint},
}
```