---
annotations_creators:
  - expert-generated
language_creators:
  - expert-generated
language:
  - ko
license:
  - cc-by-nc-sa-4.0
multilinguality:
  - monolingual
pretty_name: Korean Single Speaker Speech Dataset
size_categories:
  - 10K<n<100K
source_datasets:
  - original
task_categories:
  - text-to-speech
task_ids: []
---

Dataset Description

Description from the original author

KSS Dataset: Korean Single speaker Speech Dataset

The KSS Dataset is designed for the Korean text-to-speech task. It consists of audio files recorded by a professional female voice actress and their aligned text extracted from my books. As a copyright holder, by courtesy of the publishers, I release this dataset to the public. To the best of my knowledge, this is the first publicly available speech dataset for Korean.

File Format

Each line in transcript.v.1.3.txt is delimited by | into six fields.

  • A. Audio file path
  • B. Original script
  • C. Expanded script
  • D. Decomposed script
  • E. Audio duration (seconds)
  • F. English translation

e.g.,

1/1_0470.wav|์ €๋Š” ๋ณดํ†ต 20๋ถ„ ์ •๋„ ๋‚ฎ์ž ์„ ์žก๋‹ˆ๋‹ค.|์ €๋Š” ๋ณดํ†ต ์ด์‹ญ ๋ถ„ ์ •๋„ ๋‚ฎ์ž ์„ ์žก๋‹ˆ๋‹ค.|แ„Œแ…ฅแ„‚แ…ณแ†ซ แ„‡แ…ฉแ„แ…ฉแ†ผ แ„‹แ…ตแ„‰แ…ตแ†ธ แ„‡แ…ฎแ†ซ แ„Œแ…ฅแ†ผแ„ƒแ…ฉ แ„‚แ…กแ†ฝแ„Œแ…กแ†ทแ„‹แ…ณแ†ฏ แ„Œแ…กแ†ธแ„‚แ…ตแ„ƒแ…ก.|4.1|I usually take a nap for 20 minutes.
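
For local processing of the raw release, a minimal parsing sketch is shown below. The field names and the parse_transcript helper are illustrative choices for this example, not names defined by the dataset.

# Minimal sketch: parse transcript.v.1.3.txt into its six |-delimited fields.
# Field names below are descriptive labels chosen for this example.
FIELDS = [
    "audio_path",           # A. Audio file path
    "original_script",      # B. Original script
    "expanded_script",      # C. Expanded script
    "decomposed_script",    # D. Decomposed script
    "duration",             # E. Audio duration (seconds)
    "english_translation",  # F. English translation
]

def parse_transcript(path="transcript.v.1.3.txt"):
    """Yield one dict per transcript line."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip("\n").split("|")
            if len(parts) != len(FIELDS):
                continue  # skip malformed lines
            record = dict(zip(FIELDS, parts))
            record["duration"] = float(record["duration"])
            yield record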

Specification

  • Audio file type: wav
  • Sample rate: 44,100 Hz
  • Number of audio files: 12,853

License

CC BY-NC-SA 4.0. You CANNOT use this dataset for ANY COMMERCIAL purpose. Otherwise, you may use it freely.

Citation

If you want to cite the KSS Dataset, please use the following:

Kyubyong Park, KSS Dataset: Korean Single speaker Speech Dataset, https://kaggle.com/bryanpark/korean-single-speaker-speech-dataset, 2018

Reference

Check out this for a project using this KSS Dataset.

Contact

You can contact me at kbpark.linguist@gmail.com.

April, 2018.

Kyubyong Park

Dataset Summary

12,853 Korean audio files with transcription.

Supported Tasks and Leaderboards

text-to-speech

Languages

Korean

Dataset Structure

Data Instances

>>> from datasets import load_dataset

>>> dataset = load_dataset("Bingsu/KSS_Dataset")
>>> dataset["train"].features
{'audio': Audio(sampling_rate=44100, mono=True, decode=True, id=None),
 'original_script': Value(dtype='string', id=None),
 'expanded_script': Value(dtype='string', id=None),
 'decomposed_script': Value(dtype='string', id=None),
 'duration': Value(dtype='float32', id=None),
 'english_translation': Value(dtype='string', id=None)}
>>> dataset["train"][0]
{'audio': {'path': None,
  'array': array([ 0.00000000e+00,  3.05175781e-05, -4.57763672e-05, ...,
          0.00000000e+00, -3.05175781e-05, -3.05175781e-05]),
  'sampling_rate': 44100},
 'original_script': '๊ทธ๋Š” ๊ดœ์ฐฎ์€ ์ฒ™ํ•˜๋ ค๊ณ  ์• ์“ฐ๋Š” ๊ฒƒ ๊ฐ™์•˜๋‹ค.',
 'expanded_script': '๊ทธ๋Š” ๊ดœ์ฐฎ์€ ์ฒ™ํ•˜๋ ค๊ณ  ์• ์“ฐ๋Š” ๊ฒƒ ๊ฐ™์•˜๋‹ค.',
 'decomposed_script': 'แ„€แ…ณแ„‚แ…ณแ†ซ แ„€แ…ซแ†ซแ„Žแ…กแ†ญแ„‹แ…ณแ†ซ แ„Žแ…ฅแ†จแ„’แ…กแ„…แ…งแ„€แ…ฉ แ„‹แ…ขแ„Šแ…ณแ„‚แ…ณแ†ซ แ„€แ…ฅแ†บ แ„€แ…กแ‡€แ„‹แ…กแ†ปแ„ƒแ…ก.',
 'duration': 3.5,
 'english_translation': 'He seemed to be pretending to be okay.'}
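
The audio is decoded at its native 44,100 Hz. If a TTS pipeline expects a lower rate, the audio column can be re-cast on the fly; a minimal sketch, assuming a 22,050 Hz target (the target rate here is an arbitrary choice for illustration):

from datasets import Audio, load_dataset

dataset = load_dataset("Bingsu/KSS_Dataset")
# Re-cast the audio column so samples are decoded at 22,050 Hz
# instead of the native 44,100 Hz.
dataset = dataset.cast_column("audio", Audio(sampling_rate=22050))
print(dataset["train"][0]["audio"]["sampling_rate"])  # 22050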

Data Splits

The dataset provides a single train split with 12,853 examples.
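
If a held-out set is needed, one can be created locally; a minimal sketch, assuming an arbitrary 5% validation fraction and a fixed seed:

from datasets import load_dataset

dataset = load_dataset("Bingsu/KSS_Dataset")
# Carve a validation set out of the single train split.
splits = dataset["train"].train_test_split(test_size=0.05, seed=42)
train_ds, valid_ds = splits["train"], splits["test"]
print(len(train_ds), len(valid_ds))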