---
dataset_info:
  features:
    - name: uid
      dtype: string
    - name: file_id
      dtype: string
    - name: audio
      dtype:
        audio:
          sampling_rate: 16000
    - name: sentence
      dtype: string
    - name: n_segment
      dtype: int32
    - name: duration_ms
      dtype: float32
    - name: language
      dtype: string
    - name: sample_rate
      dtype: int32
    - name: course
      dtype: string
    - name: sentence_length
      dtype: int32
    - name: n_tokens
      dtype: int32
  splits:
    - name: train
      num_bytes: 99661277809.752
      num_examples: 75924
  download_size: 83572532883
  dataset_size: 99661277809.752
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
task_categories:
  - automatic-speech-recognition
language:
  - he
size_categories:
  - 10K<n<100K
---

## Data Description

Hebrew Speech Recognition dataset from Campus IL.

Data was scraped from the Campus website, which hosts video lectures from various courses in Hebrew.
Subtitles were then extracted from the videos and aligned with the audio.
Subtitles that are not in Hebrew were removed (WIP: non-Hebrew audio still needs to be removed as well, e.g. with a simple language classifier).
Samples shorter than 3 seconds were removed.
The total duration of the dataset is 152 hours.
Outliers in terms of the duration/character ratio were not removed, so you may come across sentences that are suspiciously long or short relative to their duration; the sketch below shows one way to filter them out.
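
Since these outliers were left in, you can drop them yourself if needed. A minimal sketch using `datasets` in streaming mode; the bounds of 30–300 ms per character are arbitrary illustrative thresholds, not values used to build the dataset:

```python
from datasets import load_dataset

ds = load_dataset("imvladikon/hebrew_speech_campus", split="train", streaming=True)

def reasonable_ratio(sample):
    # Keep samples whose duration-to-character ratio falls in a plausible range.
    ratio = sample["duration_ms"] / max(sample["sentence_length"], 1)
    return 30.0 <= ratio <= 300.0

ds = ds.filter(reasonable_ratio)
```
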
WIP: the dataset is suspiciously large and needs to be fixed (probably some original files at 22050 Hz are still included). If loading is slow, just clone the repository:

```bash
git clone https://huggingface.co/datasets/imvladikon/hebrew_speech_campus && cd hebrew_speech_campus && git lfs pull
```

and load it from the local folder: `load_dataset("./hebrew_speech_campus")`.

## Data Format

Audio files are in WAV format: 16 kHz sampling rate, 16-bit, mono. Ignore the `path` field and use the value of the `audio.array` field instead.
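
If you want to be sure every clip is decoded at 16 kHz (see the WIP note above about possible 22050 Hz leftovers), you can re-cast the audio column. A minimal sketch, assuming the standard `datasets` `Audio` feature:

```python
from datasets import load_dataset, Audio

# Note: non-streaming mode downloads the full dataset (~84 GB).
ds = load_dataset("imvladikon/hebrew_speech_campus", split="train")
# Decode (and resample if necessary) every clip at 16 kHz.
ds = ds.cast_column("audio", Audio(sampling_rate=16000))

sample = ds[0]
print(sample["audio"]["array"].shape, sample["audio"]["sampling_rate"])
```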

## Data Usage

```python
from datasets import load_dataset

ds = load_dataset("imvladikon/hebrew_speech_campus", split="train", streaming=True)
print(next(iter(ds)))
```
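
As a quick sanity check, you can run an off-the-shelf multilingual ASR model over a few streamed samples. A sketch, assuming `transformers` is installed; `openai/whisper-small` is used here only as an example checkpoint and is unrelated to this dataset:

```python
from datasets import load_dataset
from transformers import pipeline

ds = load_dataset("imvladikon/hebrew_speech_campus", split="train", streaming=True)
asr = pipeline("automatic-speech-recognition", model="openai/whisper-small")

for sample in ds.take(3):
    audio = sample["audio"]
    pred = asr({"raw": audio["array"], "sampling_rate": audio["sampling_rate"]},
               generate_kwargs={"language": "hebrew"})
    print(pred["text"])       # model hypothesis
    print(sample["sentence"]) # reference transcript
```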

## Data Sample

```python
{'uid': '10c3eda27cf173ab25bde755d0023abed301fcfd',
 'file_id': '10c3eda27cf173ab25bde755d0023abed301fcfd_13',
 'audio': {'path': '/content/hebrew_speech_campus/data/from_another_angle-_mathematics_teaching_practices/10c3eda27cf173ab25bde755d0023abed301fcfd_13.wav',
  'array': array([ 5.54326562e-07,  3.60812592e-05, -2.35188054e-04, ...,
          2.34067178e-04,  1.55649337e-04,  6.32447700e-05]),
  'sampling_rate': 16000},
 'sentence': 'הדוברים צריכים לקחת עליו אחריות, ולהיות מחויבים לו כלומר, השיח צריך להיות מחויב',
 'n_segment': 13,
 'duration_ms': 6607.98193359375,
 'language': 'he',
 'sample_rate': 16000,
 'course': 'from_another_angle-_mathematics_teaching_practices',
 'sentence_length': 79,
 'n_tokens': 13}
```

## Data Splits and Stats

- Split: `train`
- Number of samples: 75924
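
The per-sample `duration_ms` field makes it easy to recompute aggregate stats such as the total duration quoted above. A minimal sketch in streaming mode (note that this iterates over the whole split, so all audio is streamed):

```python
from datasets import load_dataset

ds = load_dataset("imvladikon/hebrew_speech_campus", split="train", streaming=True)

total_ms = 0.0
n = 0
for sample in ds:
    total_ms += sample["duration_ms"]
    n += 1

print(f"{n} samples, {total_ms / 3_600_000:.1f} hours")
```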

## Citation

Please cite the following if you use this dataset in your work:

```bibtex
@misc{imvladikon2023hebrew_speech_campus,
  author       = {Gurevich, Vladimir},
  title        = {Hebrew Speech Recognition Dataset: Campus},
  year         = {2023},
  howpublished = {\url{https://huggingface.co/datasets/imvladikon/hebrew_speech_campus}},
}
```