---
license: cc-by-4.0
size_categories:
- 1M<n<10M
configs:
- config_name: pre
data_files:
- pre/*/*.arrow
- config_name: raw
data_files:
- raw/*/*.arrow
- config_name: GK
data_files:
- pre/geekonomy/*.arrow
- raw/geekonomy/*.arrow
- config_name: GK_pre
data_files: pre/geekonomy/*.arrow
- config_name: GK_raw
data_files: raw/geekonomy/*.arrow
- config_name: OH
data_files:
- pre/osim-history/*.arrow
- raw/osim-history/*.arrow
- config_name: OH_pre
data_files: pre/osim-history/*.arrow
- config_name: OH_raw
data_files: raw/osim-history/*.arrow
- config_name: DK
data_files:
- pre/dor/*.arrow
- raw/dor/*.arrow
- config_name: DK_pre
data_files: pre/dor/*.arrow
- config_name: DK_raw
data_files: raw/dor/*.arrow
- config_name: YO
data_files:
- pre/Yo_the_podcast/*.arrow
- raw/Yo_the_podcast/*.arrow
- config_name: YO_pre
data_files: pre/Yo_the_podcast/*.arrow
- config_name: YO_raw
data_files: raw/Yo_the_podcast/*.arrow
- config_name: YV
data_files:
- pre/Yad_vashem/*.arrow
- raw/Yad_vashem/*.arrow
- config_name: YV_pre
data_files: pre/Yad_vashem/*.arrow
- config_name: YV_raw
data_files: raw/Yad_vashem/*.arrow
---

# HebDB
Paper: http://arxiv.org/abs/2407.07566

If you use our datasets, please cite the following:
```bibtex
@article{turetzky2024hebdb,
  title={HebDB: a Weakly Supervised Dataset for Hebrew Speech Processing},
  author={Turetzky, Arnon and Tal, Or and Segal-Feldman, Yael and Dissen, Yehoshua and Zeldes, Ella and Roth, Amit and Cohen, Eyal and Shrem, Yosi and Chernyak, Bronya R and Seleznova, Olga and others},
  journal={arXiv preprint arXiv:2407.07566},
  year={2024}
}
```
## Dataset Summary

A weakly supervised dataset for spoken language processing in Hebrew. HebDB offers roughly 2,500 hours of natural and spontaneous Hebrew speech recordings, covering a large variety of speakers and topics. We provide the raw recordings together with a pre-processed, weakly supervised, and filtered version. The goal of HebDB is to further enhance research and development of spoken language processing tools for Hebrew.
Data variants are: `pre` and `raw`. Note that both variants share the same columns to ease the use of dataset subsets, but `raw` only uses the columns `fname`, `audio`, and `is_raw`.
## How do I download this?

### Using 🤗 Datasets
```python
from datasets import load_dataset

# pre only
hebdb_pre = load_dataset("SLPRL-HUJI/HebDB", "pre")
# raw only
hebdb_raw = load_dataset("SLPRL-HUJI/HebDB", "raw")
# One specific source (see code list below), both raw and pre
geekonomy = load_dataset("SLPRL-HUJI/HebDB", "GK")
# One specific source, pre only
geekonomy_pre = load_dataset("SLPRL-HUJI/HebDB", "GK_pre")
```
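Since a per-source config such as `GK` bundles both the raw and the pre-processed files, you can separate the two variants with the `is_raw` column (see Data Fields below). A minimal sketch, assuming the default `train` split and that `is_raw` is a boolean flag:

```python
from datasets import load_dataset

# "GK" contains both the raw and the pre-processed Geekonomy recordings.
geekonomy = load_dataset("SLPRL-HUJI/HebDB", "GK", split="train")

# Split by the is_raw flag (assumed boolean; see Data Fields below).
geekonomy_raw = geekonomy.filter(lambda ex: ex["is_raw"])
geekonomy_pre = geekonomy.filter(lambda ex: not ex["is_raw"])
```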
To avoid downloading the entire dataset you can load it in streaming mode by passing `streaming=True`, for example:

```python
hebdb_pre = load_dataset("SLPRL-HUJI/HebDB", "pre", streaming=True)
```
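In streaming mode the dataset is returned as an `IterableDataset`, so you can peek at a few examples without fetching everything. A minimal sketch, assuming the default `train` split and the `pre` fields listed under Data Fields:

```python
from datasets import load_dataset

# Streaming returns an IterableDataset; examples are fetched lazily.
hebdb_pre = load_dataset("SLPRL-HUJI/HebDB", "pre", streaming=True, split="train")

# Inspect the first three examples without downloading the full dataset.
for example in hebdb_pre.take(3):
    print(example["fname"], example["normalized_text"])
```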
You can also load and mix sources:

```python
from datasets import concatenate_datasets, load_dataset

# Assumes the default "train" split; concatenate_datasets expects Dataset
# objects rather than the DatasetDict returned when no split is specified.
geekonomy = load_dataset("SLPRL-HUJI/HebDB", "GK_pre", split="train")
osim_history = load_dataset("SLPRL-HUJI/HebDB", "OH_pre", split="train")

# Concatenate both datasets
concatenated = concatenate_datasets([geekonomy, osim_history])
```
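If you prefer to mix sources by sampling rather than simple concatenation, the standard `interleave_datasets` utility from 🤗 Datasets can be used. A minimal sketch with illustrative probabilities (not values recommended by the paper), again assuming the default `train` split:

```python
from datasets import interleave_datasets, load_dataset

geekonomy = load_dataset("SLPRL-HUJI/HebDB", "GK_pre", split="train")
osim_history = load_dataset("SLPRL-HUJI/HebDB", "OH_pre", split="train")

# Draw roughly 70% of examples from Geekonomy and 30% from Osim History.
mixed = interleave_datasets(
    [geekonomy, osim_history],
    probabilities=[0.7, 0.3],
    seed=42,
)
```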
## Sources

The 6 available sources are reported in the table below.

| code | name |
|---|---|
| GK | Geekonomy |
| OH | Osim History |
| DK | The Dor Kahn Experience |
| YO | Yo! The podcast |
| GQ | Good Question |
| YV | Yad vashem |
## Data Fields

The data have several fields (a short access sketch follows the list):

- `fname`: file name
- `audio`:
  - `array`: array of audio samples
  - `sample_rate`: audio sampling rate
  - `path`: path to the audio file's saved location
- `is_raw`: flag for raw/pre-processed
- `raw`:
  - `fname`: original raw file name
  - `start_sec`: start time mark in seconds
  - `end_sec`: end time mark in seconds
- `source`: source name
- `n_samples`: number of samples
- `text`: transcription
- `normalized_text`: normalized transcription (details in paper)
- `score`: transcription quality score obtained by a forced aligner (details in paper)
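For illustration, a minimal sketch of reading these fields from the pre-processed variant and filtering by the alignment score; the 0.9 threshold is an arbitrary placeholder rather than a value recommended by the paper, and the default `train` split is assumed:

```python
from datasets import load_dataset

hebdb_pre = load_dataset("SLPRL-HUJI/HebDB", "GK_pre", split="train")

example = hebdb_pre[0]
waveform = example["audio"]["array"]        # audio samples
sr = example["audio"]["sample_rate"]        # sampling rate
print(example["source"], example["normalized_text"], example["score"])

# Keep only segments whose forced-alignment score passes a threshold
# (0.9 is a placeholder; see the paper for how the score is computed).
filtered = hebdb_pre.filter(lambda ex: ex["score"] >= 0.9)
```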
## Licensing Information

Data is licensed under the terms of the Creative Commons Attribution 4.0 International License (CC BY 4.0). The full text of the CC BY 4.0 license is available at https://creativecommons.org/licenses/by/4.0/.
## Acknowledgements
This research work was supported by the Israel Innovation Authority, grant number 78563.