---
license: cc-by-nc-sa-4.0
---
|
|
|
This dataset contains precomputed audio features designed for use with the [openWakeWord library](https://github.com/dscripka/openWakeWord). |
|
Specifically, they are intended to be used as general-purpose negative data (that is, data that does *not* contain the target wake word/phrase) for training custom openWakeWord models.
|
|
|
The individual .npy files in this dataset are not original audio data; rather, they are low-dimensional audio features produced by a pre-trained [speech embedding model from Google](https://tfhub.dev/google/speech_embedding/1).
|
openWakeWord uses these features as inputs to custom word/phrase detection models. |
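Because the features are plain NumPy arrays, they can be inspected and batched without the original audio. A minimal sketch (the filename is a stand-in, and a small dummy array is generated here so the snippet is self-contained; the real files hold arrays of shape `(num_windows, 16, 96)`):

```python
import numpy as np

# Create a small stand-in for one of the dataset's .npy feature files;
# the real files hold arrays of shape (num_windows, 16, 96).
demo = np.random.rand(100, 16, 96).astype(np.float32)
np.save("negative_features_demo.npy", demo)

# mmap_mode="r" keeps a large feature file on disk and reads only the
# rows you index, which matters for the full ~2000-hour array.
features = np.load("negative_features_demo.npy", mmap_mode="r")

# Each row is 16 temporal steps of 80 ms = 1.28 s of audio, with a
# 96-dimensional speech embedding per step.
batch = np.asarray(features[:32])  # materialize a batch for training
print(features.shape, batch.shape)
```

Memory-mapping is optional, but avoids holding the entire negative set in RAM during training.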
|
|
|
The dataset currently contains precomputed features from the following sources.
|
|
|
## ACAV100M |
|
|
|
The ACAV100M dataset contains a highly diverse set of audio data with multilingual speech, noise, and music, all captured in real-world environments.
|
This is a highly effective dataset for training custom openWakeWord models.
|
|
|
**Dataset source**: https://acav100m.github.io/ |
|
|
|
**Size**: An array of shape (5625000, 16, 96), corresponding to ~2000 hours of audio from the ACAV100M dataset. Each row has a temporal dimension of 16; at 80 ms per temporal step, each row represents 1.28 seconds of audio.
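The stated size can be sanity-checked with a few lines of integer arithmetic (a quick consistency check, not part of any library):

```python
# Sanity-check: 5,625,000 rows of 16 steps * 80 ms each => ~2000 hours
num_rows = 5_625_000
ms_per_row = 16 * 80                      # 1280 ms = 1.28 s per row
total_hours = num_rows * ms_per_row / 1000 / 3600
print(total_hours)  # 2000.0
```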
|
|
|
|
|
## False-Positive Validation Set |
|
|
|
This is a hand-selected combination of audio features (representing ~11 hours of total audio) that serves as a false-positive validation set when training custom openWakeWord models. |
|
It is intended to be broadly representative of the different types of environments where openWakeWord models could be deployed, and thus useful for estimating false-positive rates. |
|
|
|
The contributing audio datasets are: |
|
|
|
1) The entire [DiPCo](https://www.amazon.science/publications/dipco-dinner-party-corpus) dataset (~5.3 hours) |
|
2) Selected clips from the [Santa Barbara Corpus of Spoken American English](https://www.linguistics.ucsb.edu/research/santa-barbara-corpus) (~3.7 hours) |
|
3) Selected clips from the [MUSDB Music Dataset](https://sigsep.github.io/datasets/musdb.html) (2 hours) |
|
|
|
Note that the MUSDB audio data was first reverberated with the [MIT impulse response recordings](https://huggingface.co/datasets/davidscripka/MIT_environmental_impulse_responses) to make it more representative of real-world deployments. |
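One intended use of this validation set is estimating how often a trained model fires on audio known to be free of the wake word. A hedged sketch, where `scores` stands in for per-window model outputs (the values here are random placeholders, not real predictions):

```python
import numpy as np

# Placeholder scores standing in for wake word model outputs on the
# validation features, one score per 1.28 s window (random values,
# purely for illustration).
rng = np.random.default_rng(0)
scores = rng.random(31_000)

threshold = 0.5                        # detection threshold under test
hours = len(scores) * 1.28 / 3600      # audio hours the windows represent
false_positives = int((scores > threshold).sum())
fp_per_hour = false_positives / hours
print(round(hours, 1), fp_per_hour)
```

With real model scores, raising `threshold` lowers the false-positives-per-hour rate at the cost of recall on the target phrase.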
|
|