---
license: cc-by-sa-4.0
dataset_info:
  features:
    - name: meeting_id
      dtype: string
    - name: speaker_id
      dtype: string
    - name: audio_id
      dtype: string
    - name: audio
      dtype: audio
    - name: segments
      list:
        - name: end
          dtype: float64
        - name: start
          dtype: float64
        - name: transcript
          dtype: string
        - name: words
          list:
            - name: end
              dtype: float64
            - name: start
              dtype: float64
            - name: word
              dtype: string
    - name: transcript
      dtype: string
  splits:
    - name: dev
      num_bytes: 14155765669
      num_examples: 130
    - name: train
      num_bytes: 74754662936
      num_examples: 684
    - name: test
      num_bytes: 13775584735
      num_examples: 124
  download_size: 120234623488
  dataset_size: 102802035597
configs:
  - config_name: default
    data_files:
      - split: dev
        path: data/dev/*
      - split: test
        path: data/test/*
      - split: train
        path: data/train/*
  - config_name: example
    data_files:
      - split: train
        path: data/example/*
task_categories:
  - automatic-speech-recognition
  - voice-activity-detection
language:
  - fr
---

Note: if the data viewer is not working, use the "example" subset.

SUMM-RE

The SUMM-RE dataset is a collection of transcripts of French conversations, aligned with the audio signal.

It is a corpus of meeting-style conversations in French, created for the SUMM-RE project (ANR-20-CE23-0017).

The full dataset is described in Hunter et al. (2024): "SUMM-RE: A corpus of French meeting-style conversations".

  • Created by: Recording and manual correction of the corpus were carried out by the Language and Speech Lab (LPL) at the University of Aix-Marseille, France.
  • Funded by: The National Research Agency of France (ANR) for the SUMM-RE project (ANR-20-CE23-0017).
  • Shared by: LINAGORA (coordinator of the SUMM-RE project)
  • Language: French
  • License: CC BY-SA 4.0

Dataset Description

Data from the dev and test splits have been manually transcribed and aligned.

Data from the train split has been automatically transcribed and aligned with the Whisper pipeline described in Yamasaki et al. (2023): "Transcribing And Aligning Conversational Speech: A Hybrid Pipeline Applied To French Conversations". The audio and transcripts used to evaluate this pipeline, a subset of the dev split (*), can be found on Ortolang.

The dev and test splits of SUMM-RE can be used to evaluate automatic speech recognition and voice activity detection models for conversational, spoken French. Speaker diarization can also be evaluated if the individual tracks of the same meeting are merged together, as sketched below. SUMM-RE transcripts can be used for the training of language models.

Each conversation lasts roughly 20 minutes. The number of conversations contained in each split is as follows:

  • train: 210 (x ~20 minutes = ~67 hours)
  • dev: 36 (x ~20 minutes = ~12 hours)
  • test: 37 (x ~20 minutes = ~12 hours)

Each conversation contains 3-4 speakers (and in rare cases, 2) and each participant has an individual microphone and associated audio track, giving rise to the following number of tracks for each split:

  • train: 684 (x ~20 minutes = ~226 hours)
  • dev: 130 (x ~20 minutes = ~43 hours)
  • test: 124 (x ~20 minutes = ~41 hours)
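
As noted above, evaluating speaker diarization requires mixing the per-speaker tracks of a meeting into a single signal. The following is only a minimal sketch of one way to do this with numpy, assuming that all tracks of a given meeting share the same sampling rate; the meeting id is the illustrative example used in the next section and should be replaced by one that actually occurs in the chosen split.

import datasets
import numpy as np

devset = datasets.load_dataset("linagora/SUMM-RE", split="dev", streaming=True)

# Gather the per-speaker tracks of one meeting.
meeting_id = "001a_PARL"
tracks = [s["audio"]["array"] for s in devset if s["meeting_id"] == meeting_id]

# Pad to the longest track and sum the signals to obtain a single-channel mix.
mixed = np.zeros(max(len(t) for t in tracks))
for track in tracks:
    mixed[: len(track)] += track
mixed /= len(tracks)  # simple normalisation to limit clipping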

Dataset Structure

To visualize an example from the corpus, select the "example" split in the Dataset Viewer.

The corpus contains the following information for each audio track:

  • meeting_id, e.g. 001a_PARL, includes:
    • experiment number, e.g. 001
    • meeting order: a|b|c (there were three meetings per experiment)
    • experiment type: E (experiment) | P (pilot experiment)
    • scenario/topic: A|B|C|D|E
    • meeting type: R (reporting) | D (decision) | P (planning)
    • recording location: L (LPL) | H (H2C2 studio) | Z (Zoom) | D (at home)
  • speaker_id
  • audio_id: meeting_id + speaker_id
  • audio: the audio track for an individual speaker
  • segments: a list of dictionaries where each entry provides the transcription of a segment with timestamps for the segment and each word that it contains. An example is:
[
  {
    "start": 0.5,
    "end": 1.2,
    "transcript": "bonjour toi",
    "words": [
      {
        "word": "bonjour",
        "start": 0.5,
        "end": 0.9
      },
      {
        "word": "toi",
        "start": 0.9,
        "end": 1.2
      }
    ]
  },
  ...
 ]
  • transcript: a string formed by concatenating the text from all of the segments (note that, since each track contains a single speaker, this transcript implicitly spans the periods of silence during which the other participants are speaking on their own tracks)
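
As a rough illustration of how these fields fit together, the sketch below loads one sample from the "example" configuration, splits the meeting_id into its components following the naming scheme listed above, and walks over the segment and word timestamps (field names as described above).

import datasets

ds = datasets.load_dataset("linagora/SUMM-RE", "example")
sample = ds["train"][0]

# meeting_id follows the pattern <experiment><order>_<type><scenario><meeting type><location>,
# e.g. 001a_PARL.
prefix, code = sample["meeting_id"].split("_")
experiment, order = prefix[:-1], prefix[-1]
exp_type, scenario, meeting_type, location = code

# Rebuild a transcript from the segments and measure the time covered by speech.
rebuilt = " ".join(seg["transcript"] for seg in sample["segments"])
speech_seconds = sum(seg["end"] - seg["start"] for seg in sample["segments"])

# Word-level timestamps live inside each segment.
first_word = sample["segments"][0]["words"][0]
print(first_word["word"], first_word["start"], first_word["end"])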

Example Use

To load the full dataset:

import datasets

ds = datasets.load_dataset("linagora/SUMM-RE")

Use the streaming option to avoid downloading the full dataset when only a single split is required:

import datasets

devset = datasets.load_dataset("linagora/SUMM-RE", split="dev", streaming=True)

for sample in devset:
    ...

Load some short extracts of the data to explore the structure:


import datasets

ds = datasets.load_dataset("linagora/SUMM-RE", "example")

sample = ds["train"][0]
print(sample)
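
Since the dev and test splits are manually transcribed, they can serve as references for scoring an ASR system. Below is a rough sketch using the jiwer package; transcribe() is a hypothetical placeholder for the system under evaluation, and in practice one may also want to strip the annotation symbols described in the Annotations section before scoring.

import datasets
import jiwer  # pip install jiwer

def transcribe(audio_array, sampling_rate):
    # Hypothetical placeholder: replace with the ASR system being evaluated.
    return ""

devset = datasets.load_dataset("linagora/SUMM-RE", split="dev", streaming=True)

references, hypotheses = [], []
for sample in devset:
    references.append(sample["transcript"])
    hypotheses.append(transcribe(sample["audio"]["array"], sample["audio"]["sampling_rate"]))

print("WER:", jiwer.wer(references, hypotheses))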

Dataset Creation

Curation Rationale

The full SUMM-RE corpus, which includes meeting summaries, is designed to train and evaluate models for meeting summarization. This version is an extract of the full corpus used to evaluate various stages of the summarization pipeline, starting with automatic transcription of the audio signal.

Source Data

The SUMM-RE corpus is an original corpus designed by members of LINAGORA and the University of Aix-Marseille and recorded by the latter.

Data Collection and Processing

For details, see Hunter et al. (2024).

Audio Sampling Rates

By default, files recorded through Zoom have a sampling rate of 32000 Hz and all other files have a sampling rate of 48000 Hz. The sampling rates of the exceptions are as follows:

44100 = ['071*']

32000 = ['101*']

22050 = ['018a_EARZ_055.wav', '018a_EARZ_056.wav', '018a_EARZ_057.wav', '018a_EARZ_058.wav', '020b_EBDZ_017.wav', '020b_EBDZ_053.wav', '020b_EBDZ_057.wav', '020b_EBDZ_063.wav', '027a_EBRH_025.wav', '027a_EBRH_075.wav', '027a_EBRH_078.wav', '032b_EADH_084.wav', '032b_EADH_085.wav', '032b_EADH_086.wav', '032b_EADH_087.wav', '033a_EBRH_091.wav', '033c_EBPH_092.wav', '033c_EBPH_093.wav', '033c_EBPH_094.wav', '034a_EBRH_095.wav', '034a_EBRH_096.wav', '034a_EBRH_097.wav', '034a_EBRH_098.wav', '035b_EADH_088.wav', '035b_EADH_096.wav', '035b_EADH_097.wav', '035b_EADH_098.wav', '036c_EAPH_091.wav', '036c_EAPH_092.wav', '036c_EAPH_093.wav', '036c_EAPH_099.wav', '069c_EEPL_156.wav', '069c_EEPL_157.wav', '069c_EEPL_158.wav', '069c_EEPL_159.wav']
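
Since the sampling rate is not uniform across recordings, it can be convenient to resample everything to a single rate at load time. A minimal sketch using the standard datasets.Audio cast (the 16 kHz target is only an illustration; any rate can be chosen):

import datasets

ds = datasets.load_dataset("linagora/SUMM-RE", split="dev")

# Decode every file at a uniform 16 kHz, whatever its original rate.
ds = ds.cast_column("audio", datasets.Audio(sampling_rate=16000))

print(ds[0]["audio"]["sampling_rate"])  # 16000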

Who are the source data producers?

Corpus design and production:

  • University of Aix-Marseille: Océane Granier (corpus conception, recording, annotation), Laurent Prévot (corpus conception, annotation, supervision), Hiroyoshi Yamasaki (corpus cleaning, alignment and anonymization), Roxane Bertrand (corpus conception and annotation), with helpful input from Brigitte Bigi and Stéphane Rauzy.

  • LINAGORA: Julie Hunter, Kate Thompson and Guokan Shang (corpus conception)

Corpus participants:

  • Participants for the in-person conversations were recruited on the University of Aix-Marseille campus.
  • Participants for the Zoom meetings were recruited through Prolific.

Annotations

Transcripts are not punctuated and all words are in lower case.

Annotations follow the conventions laid out in chapter 3 of The SPPAS Book by Brigitte Bigi. Transcripts may therefore contain additional annotations in the following contexts:

  • truncated words, noted as a - at the end of the token string (an ex- example);
  • noises, noted by a * (not available for some languages);
  • laughter, noted by a @ (not available for some languages);
  • short pauses, noted by a +;
  • elisions, mentioned in parentheses;
  • specific pronunciations, noted with brackets [example,eczap];
  • comments are preferably noted inside braces {this is a comment!};
  • comments can also be noted inside brackets without using a comma [this and this];
  • liaisons, noted between = (this =n= example);
  • morphological variants, noted with <ice scream,I scream>;
  • proper name annotation, like $ John S. Doe $.

Note that the symbols * + @ must be surrounded by whitespace.
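
For downstream uses that expect plain text (for example, language-model training or ASR scoring), these annotation marks can be stripped. The function below is only a rough sketch of such a cleanup; depending on the task, one may prefer to keep the orthographic form inside bracketed pronunciations and morphological variants rather than dropping the whole annotation.

import re

def clean_transcript(text: str) -> str:
    # Remove comments in braces and bracketed annotations.
    text = re.sub(r"\{[^}]*\}", " ", text)
    text = re.sub(r"\[[^\]]*\]", " ", text)
    # Remove morphological-variant annotations and proper-name / liaison markers.
    text = re.sub(r"<[^>]*>", " ", text)
    text = text.replace("$", " ").replace("=", " ")
    # Remove noise (*), laughter (@) and short-pause (+) symbols, which are
    # surrounded by whitespace.
    text = re.sub(r"(?<=\s)[*@+](?=\s)", " ", f" {text} ")
    # Collapse the remaining whitespace.
    return re.sub(r"\s+", " ", text).strip()

print(clean_transcript("bonjour + ex- example [example,eczap] @"))  # bonjour ex- example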

Annotation process

[More Information Needed]

Who are the annotators?

Principal annotator for dev: Océane Granier

Principal annotators for test: Eliane Bailly, Manon Méaume, Lyne Rahabi, Lucille Rico

Additional assistance from: Laurent Prévot, Hiroyoshi Yamasaki and Roxane Bertrand

Personal and Sensitive Information

A portion of the dev split has been (semi-automatically) anonymized for the pipeline described in Yamasaki et al. (2023).

Bias, Risks, and Limitations

[More Information Needed]

Recommendations

[More Information Needed]

Citations

Please cite the papers below if using the dataset in your work.

Description of the full dataset:

Julie Hunter, Hiroyoshi Yamasaki, Océane Granier, Jérôme Louradour, Roxane Bertrand, Kate Thompson and Laurent Prévot (2024): "SUMM-RE: A corpus of French meeting-style conversations," TALN 2024.

@inproceedings{hunter2024summre,
  title={SUMM-RE: A corpus of French meeting-style conversations},
  author={Hunter, Julie and Yamasaki, Hiroyoshi and Granier, Oc{\'e}ane and Louradour, J{\'e}r{\^o}me and Bertrand, Roxane and Thompson, Kate and Pr{\'e}vot, Laurent},
  booktitle={Actes de JEP-TALN-RECITAL 2024. 31{\`e}me Conf{\'e}rence sur le Traitement Automatique des Langues Naturelles, volume 1: articles longs et prises de position},
  pages={508--529},
  year={2024},
  organization={ATALA \& AFPC}
}

The Whisper Pipeline:

Hiroyoshi Yamasaki, Jérôme Louradour, Julie Hunter and Laurent Prévot (2023): "Transcribing and aligning conversational speech: A hybrid pipeline applied to French conversations," Workshop on Automatic Speech Recognition and Understanding (ASRU).

@inproceedings{yamasaki2023transcribing,
  title={Transcribing And Aligning Conversational Speech: A Hybrid Pipeline Applied To French Conversations},
  author={Yamasaki, Hiroyoshi and Louradour, J{\'e}r{\^o}me and Hunter, Julie and Pr{\'e}vot, Laurent},
  booktitle={2023 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU)},
  pages={1--6},
  year={2023},
  organization={IEEE}
}

(*)The following meetings were used to evaluate the pipeline in Yamasaki et al. (2023):

asru = ['018a_EARZ_055', '018a_EARZ_056', '018a_EARZ_057', '018a_EARZ_058', '020b_EBDZ_017', '020b_EBDZ_053', '020b_EBDZ_057', '020b_EBDZ_063', '027a_EBRH_025', '027a_EBRH_075', '027a_EBRH_078', '032b_EADH_084', '032b_EADH_085', '032b_EADH_086', '032b_EADH_087', '033a_EBRH_091', '033a_EBRH_092', '033a_EBRH_093', '033a_EBRH_094', '033c_EBPH_091', '033c_EBPH_092', '033c_EBPH_093', '033c_EBPH_094', '034a_EBRH_095', '034a_EBRH_096', '034a_EBRH_097', '034a_EBRH_098', '035b_EADH_088', '035b_EADH_096', '035b_EADH_097', '035b_EADH_098', '036c_EAPH_091', '036c_EAPH_092', '036c_EAPH_093', '036c_EAPH_099', '069c_EEPL_156', '069c_EEPL_157', '069c_EEPL_158', '069c_EEPL_159']

Acknowledgements

We gratefully acknowledge support from the Agence Nationale de la Recherche (ANR) for the SUMM-RE project (ANR-20-CE23-0017).