---
pretty_name: Claire English Dialogue Dataset (CEDD)
license: cc-by-nc-sa-4.0
language:
  - en
multilinguality:
  - monolingual
size_categories:
  - 100M<n<1B
task_categories:
  - text-generation
  - text2text-generation
task_ids:
  - language-modeling
  - dialogue-modeling
  - dialogue-generation
tags:
  - conversational
  - text-generation
  - conditional-text-generation
  - dialogue-modeling
  - dialogue-generation
viewer: true
configs:
  - config_name: default
    sample_by: paragraph
    data_files:
      - split: train
        path: EN/*/train.txt
      - split: test
        path: EN/*/test.txt
---

# Claire English Dialogue Dataset (CEDD)

**A collection of English dialogue transcripts**

This is the first packaged version of the datasets used to train the English variants of the Claire family of large language models (OpenLLM-France/Claire-7B-EN-0.1).

The Claire English Dialogue Dataset (CEDD) is a collection of transcripts of English dialogues from various sources, including parliamentary proceedings, interviews, broadcasts, meetings, and free conversations. Each dialogue is split into speech turns, and each speech turn is labeled with the name of the speaker, or with a unique identifier if the speaker is unknown.

## Dataset composition

CEDD can be broken down into:

- 962,550 conversations in total (812,705 in train, 11,992 in test)
- 20,863,917 speech turns in total (18,576,327 in train, 359,527 in test)
- around 864 million words

It is a collection of several independent datasets, classified by the types of conversations they contain. This categorization is designed to more evenly balance the influence of different styles of dialogue on model training and to facilitate future applications of CEDD for which certain types of dialogue might be more helpful than others.

For more information, see the following document:

- `EN/metadata.csv` contains further statistics on the different subcorpora (broken down by train/test splits).
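
For a quick look at these statistics, the CSV can be loaded with pandas. This is only a minimal sketch; the exact column names in `EN/metadata.csv` may differ, so check the header of the file you downloaded:

```python
import pandas as pd

# Inspect the per-subcorpus statistics shipped with the dataset.
# Run from the root of the downloaded repository; adjust the path otherwise.
metadata = pd.read_csv("EN/metadata.csv")
print(metadata.columns.tolist())  # which statistics are available
print(metadata.head())            # first few subcorpora
```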

## Data sources

| Dataset | Description | Words | Turns | Conversations | License (and conditions) |
|---------|-------------|-------|-------|---------------|--------------------------|
| **Parliamentary Proceedings** | | | | | |
| Europarl | The Europarl parallel corpus. | 56M | - | 11K | No copyright restrictions. If you use this data in your research, please contact phi@jhu.edu |
| **Spoken Dialogue** | | | | | |
| Charlotte Narratives | The Charlotte Narrative and Conversation Collection (CNCC) contains 95 narratives, conversations and interviews representative of the residents of Mecklenburg County, North Carolina, and surrounding North Carolina communities. | 200K | - | 93 | Available for download and use for research and development, including commercial development. |
| Switchboard | Approximately 260 hours of speech, originally collected by Texas Instruments in 1990-91 under DARPA sponsorship. | 3M | - | 2320 | LDC User Agreement for Non-Members |
| **Broadcast** | | | | | |
| MediaSum | The MediaSum dataset for summarization. | 720M | - | 458K | For research purposes only |
| **Meetings** | | | | | |
| AMI | The AMI Meeting Corpus is a multi-modal data set consisting of 100 hours of meeting recordings. | 712K | - | <1K | CC BY 4.0 |
| ICSI | About 70 hours of meeting recordings. | 804K | - | <1K | CC BY 4.0 |
| **Assistance** | | | | | |
| ReDial | ReDial (Recommendation Dialogues) is an annotated dataset of dialogues in which users recommend movies to each other. | 1.6M | - | - | CC BY 4.0 |
| OpenDialKG | OpenDialKG is a dataset of conversations between two crowdsourcing agents engaging in a dialog about a given topic. | 1M | - | - | CC BY-NC 4.0 |
| ABCD | Action-Based Conversations Dataset. | 1.5M | - | - | MIT |
| AirDialogue | AirDialogue is a benchmark dataset for goal-oriented dialogue generation research. | 37M | - | - | Apache License 2.0 |
| MULTIWOZ2_2 | Multi-Domain Wizard-of-Oz dataset (MultiWOZ), a fully labeled collection of human-human written conversations spanning multiple domains and topics. | 1.9M | - | - | Apache License 2.0 |
| MulDoGO | Conversations from the airline, fast food, finance, insurance, media, and software domains. | 10M | - | - | CDLA Permissive License |
| **Free Chat** | | | | | |
| Chit-Chat | Open-domain conversational dataset from the BYU Perception, Control & Cognition lab's Chit-Chat Challenge. | 2.3M | 7.1K | 258K | MIT License |
| DailyDialog | High-quality multi-turn dialog dataset. | 1.2M | - | 13K | CC BY-NC-SA 4.0 |
| **Misc** | | | | | |
| British National Corpus (BNC) | A collection of samples of written and spoken language from a wide range of sources, designed to represent a wide cross-section of British English from the late twentieth century. | 110M | - | 1K | BNC License |

## Example use (Python)

In the following, `sample_by="paragraph"` is important to ensure that each sample corresponds to a full conversation (not just a single speech turn).

Load the dataset from the Hugging Face cache (downloaded under `~/.cache/huggingface/datasets`):

```python
from datasets import load_dataset

dataset = load_dataset("OpenLLM-France/Claire-Dialogue-English-0.1", sample_by="paragraph", streaming=True)
```

Load the dataset from raw text files:

```python
from datasets import load_dataset
import glob

path = "path/to/dataset"
train_files = glob.glob(path + "/*/train.txt")
test_files = glob.glob(path + "/*/test.txt")

dataset = load_dataset("text", data_files={"train": train_files, "test": test_files}, sample_by="paragraph", streaming=True)
```

Iterate over the dataset:

```python
for sample in dataset["train"]:
    train_conversation = sample["text"]
    ...

for sample in dataset["test"]:
    test_conversation = sample["text"]
    ...
```
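
Because `streaming=True` returns iterable splits, you can also peek at a few conversations without iterating over the whole corpus. A minimal sketch:

```python
from itertools import islice

# Print the opening speech turn of the first three training conversations
for sample in islice(dataset["train"], 3):
    conversation = sample["text"]
    print(conversation.splitlines()[0])
```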

## Important notes

All datasets were normalized in text files so that:

- Conversations are separated by a single blank line.
- Each line corresponds to a single speech turn.
- Each line begins with a speaker label of the form `[***:]` (a parsing sketch is given after this list).
- When speaker names are anonymized or otherwise unknown, speakers are distinguished by numbers in the following format: `[speaker001:]`, `[speaker002:]`, …
  Otherwise, speakers are labeled with their names or roles, e.g. `[Paul:]`, `[François Mitterrand:]`, `[M. le président:]`.
- There are no parentheses: special annotations are always between square brackets.
- Common tags include:
  - `[PII]`: Personally Identifiable Information (e.g. an anonymized name)
  - `[NOISE]`: distinct ambient noises
  - `[LAUGHTER]`: laughter
- Depending on the data source, the data may or may not include punctuation marks and upper-case letters.
- The data were normalized in various ways, including Unicode NFC normalization, conversion of non-breaking spaces to regular spaces, and standardization of punctuation marks (e.g. `…` → `...`, and various quotation marks such as `«`, `»`, `“`, `”` → `"`).
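
Given these conventions, each conversation can be split back into (speaker, text) pairs with a simple regular expression. The helper below is only an illustrative sketch, not part of the dataset tooling:

```python
import re

# A speech turn starts with a speaker label of the form "[<speaker>:]"
SPEAKER_LABEL = re.compile(r"^\[([^\]]+):\]\s*")

def parse_conversation(conversation):
    """Split one conversation (one speech turn per line) into (speaker, text) pairs."""
    turns = []
    for line in conversation.splitlines():
        match = SPEAKER_LABEL.match(line)
        if match:
            turns.append((match.group(1), line[match.end():]))
    return turns

example = "[speaker001:] Hello there. [LAUGHTER]\n[speaker002:] Hi, how are you?"
print(parse_conversation(example))
# [('speaker001', 'Hello there. [LAUGHTER]'), ('speaker002', 'Hi, how are you?')]
```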

## License

Given that some of the corpora used for training are only available under CC-BY-NC-SA licenses, Claire-Dialogue-English-0.1 is made available under the CC-BY-NC-SA 4.0 license.

## Citations

When using the CEDD corpus, please cite this page:

```bibtex
@misc{openllm2024claire_en,
  author = {Julie Hunter and Jérôme Louradour and Virgile Rennard and Ismaïl Harrando and Guokan Shang and Jean-Pierre Lorré},
  title = {The Claire English Dialogue Dataset},
  year = {2024},
  publisher = {HuggingFace},
  journal = {HuggingFace},
  howpublished = {\url{https://huggingface.co/datasets/OpenLLM-France/Claire-Dialogue-English-0.1}},
}
```

You should also provide citations for all of the original corpora, which are listed in the data sources table above.

## Contact

contact@openllm-france.fr