---
language:
  - en
license: cc0-1.0
task_categories:
  - text-to-audio
  - automatic-speech-recognition
  - audio-to-audio
  - audio-classification
dataset_info:
  features:
    - name: audio
      dtype: audio
    - name: speaker_id
      dtype: string
    - name: transcript
      dtype: string
    - name: native_language
      dtype: string
    - name: subset
      dtype: string
  splits:
    - name: all
      num_bytes: 3179636720.056
      num_examples: 41806
  download_size: 3667597693
  dataset_size: 3179636720.056
configs:
  - config_name: default
    data_files:
      - split: all
        path: data/all-*
---

# ESLTTS

The full paper is available on arXiv and [IEEE Xplore](https://doi.org/10.1109/TASLP.2024.3393714).

## Dataset Access

You can access this dataset through Hugging Face, Google Drive, or IEEE DataPort.
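
If you use the Hugging Face copy, the single `all` split declared in the metadata above can be loaded with the `datasets` library. A minimal sketch, assuming the repository id is `MushanW/ESLTTS`:

```python
# Minimal loading sketch; the repository id below is assumed from this page.
from datasets import load_dataset

# The card's metadata declares a single split named "all".
ds = load_dataset("MushanW/ESLTTS", split="all")

example = ds[0]
print(example["speaker_id"], example["native_language"], example["subset"])
print(example["transcript"])

# The "audio" feature decodes to a dict with "array" and "sampling_rate".
audio = example["audio"]
print(audio["sampling_rate"], audio["array"].shape)
```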

## Abstract

With recent progress in speaker-adaptive TTS, advanced approaches have shown a remarkable capacity to reproduce a speaker's voice on commonly used TTS datasets. However, mimicking voices with substantial accents, such as those of non-native English speakers, remains challenging. Regrettably, the absence of a dedicated TTS dataset for speakers with substantial accents inhibits the research and evaluation of speaker-adaptive TTS models under such conditions. To address this gap, we developed a corpus of English utterances from non-native speakers.

We named this corpus the “English as a Second Language TTS dataset” (ESLTTS). The ESLTTS dataset consists of roughly 37 hours of speech across about 42,000 utterances from 134 non-native English speakers, representing a diversity of linguistic backgrounds spanning 31 native languages. For each speaker, the dataset includes an adaptation set lasting about 5 minutes for speaker adaptation, a test set comprising 10 utterances for speaker-adaptive TTS evaluation, and a development set for further research.
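
Since the Hugging Face release exposes everything as one flat `all` split, a speaker's adaptation, test, and development sets have to be recovered from the `subset` column. A small sketch, assuming the `subset` values match the `ada`/`test`/`dev` file prefixes shown in the directory layout below:

```python
# Sketch of rebuilding one speaker's subsets from the flat split; the
# subset labels "ada", "test", and "dev" are assumed to match the file
# prefixes in the directory layout below.
from datasets import load_dataset

ds = load_dataset("MushanW/ESLTTS", split="all")

speaker = ds[0]["speaker_id"]  # pick an arbitrary speaker for illustration
by_subset = {
    name: ds.filter(
        lambda ex, name=name: ex["speaker_id"] == speaker and ex["subset"] == name
    )
    for name in ("ada", "test", "dev")
}
print({name: len(split) for name, split in by_subset.items()})
```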

## Dataset Structure

```
ESLTTS Dataset/
├─ Malayalam_3/     ------------ {Speaker Native Language}_{Speaker id}
│  ├─ ada_1.flac    ------------ {Subset Name}_{Utterance id}
│  ├─ ada_1.txt     ------------ Transcription for "ada_1.flac"
│  ├─ test_1.flac   ------------ {Subset Name}_{Utterance id}
│  ├─ test_1.txt    ------------ Transcription for "test_1.flac"
│  ├─ dev_1.flac    ------------ {Subset Name}_{Utterance id}
│  ├─ dev_1.txt     ------------ Transcription for "dev_1.flac"
│  ├─ ...
├─ Arabic_3/        ------------ {Speaker Native Language}_{Speaker id}
│  ├─ ada_1.flac    ------------ {Subset Name}_{Utterance id}
│  ├─ ...
├─ ...
```
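
If you download the archive from Google Drive or IEEE DataPort instead, the layout above can be traversed directly. A sketch, with a placeholder root path:

```python
# Sketch of pairing each recording with its transcript on disk; the root
# path is a placeholder for wherever the archive was extracted.
from pathlib import Path

root = Path("ESLTTS Dataset")
for flac in sorted(root.glob("*/*.flac")):
    native_language, speaker_id = flac.parent.name.rsplit("_", 1)  # e.g. "Malayalam", "3"
    subset, utterance_id = flac.stem.rsplit("_", 1)                # e.g. "ada", "1"
    transcript = flac.with_suffix(".txt").read_text(encoding="utf-8").strip()
    print(native_language, speaker_id, subset, utterance_id, transcript[:40])
```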

## Citation

```bibtex
@article{wang2024usat,
  author       = {Wenbin Wang and
                  Yang Song and
                  Sanjay K. Jha},
  title        = {{USAT:} {A} Universal Speaker-Adaptive Text-to-Speech Approach},
  journal      = {{IEEE} {ACM} Trans. Audio Speech Lang. Process.},
  volume       = {32},
  pages        = {2590--2604},
  year         = {2024},
  url          = {https://doi.org/10.1109/TASLP.2024.3393714},
  doi          = {10.1109/TASLP.2024.3393714},
}
```