Dataset columns: "audio" (audio clips with durations from 2.8 s to 241 s) and "label" (class label).

Positive Transfer of the Whisper Speech Transformer to Human and Animal Voice Activity Detection

We propose WhisperSeg, which utilizes the Whisper Transformer, pre-trained for Automatic Speech Recognition (ASR), for both human and animal Voice Activity Detection (VAD). For more details, please refer to our paper:

Positive Transfer of the Whisper Speech Transformer to Human and Animal Voice Activity Detection

Nianlong Gu, Kanghwi Lee, Maris Basha, Sumit Kumar Ram, Guanghao You, Richard H. R. Hahnloser
University of Zurich and ETH Zurich

This is the Bengalese finch dataset, customized for animal Voice Activity Detection (vocal segmentation) with WhisperSeg.

Download Dataset

from huggingface_hub import snapshot_download

# Download the full dataset snapshot into a local directory
snapshot_download("nccratliri/vad-bengalese-finch", local_dir="data/bengalese-finch", repo_type="dataset")
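
As a quick sanity check after downloading, the minimal sketch below lists the audio files and inspects one clip. It assumes the snapshot contains WAV files; adjust the glob pattern and paths to the actual layout of the downloaded folder.

from pathlib import Path
import soundfile as sf

data_dir = Path("data/bengalese-finch")

# Collect all WAV files in the snapshot (recursive; layout may differ)
wav_files = sorted(data_dir.rglob("*.wav"))
print(f"Found {len(wav_files)} audio files")

# Load the first clip and report its duration and sample rate
if wav_files:
    audio, sr = sf.read(wav_files[0])
    print(f"{wav_files[0].name}: {len(audio) / sr:.2f} s at {sr} Hz")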

For more usage details, please refer to the GitHub repository: https://github.com/nianlonggu/WhisperSeg
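
For orientation, here is a hypothetical segmentation sketch modeled on the examples in the WhisperSeg repository. The WhisperSegmenter class name, import path, checkpoint id, and segment() arguments are assumptions, not verified API; consult the repository for the exact usage.

import librosa
# Assumption: the cloned WhisperSeg repo provides a segmenter class;
# verify the class name and import path in the repository.
from model import WhisperSegmenter

# Hypothetical checkpoint id; see the repo for released checkpoints.
segmenter = WhisperSegmenter("nccratliri/whisperseg-large-ms", device="cuda")

# Load one downloaded clip at its native sample rate (path is illustrative)
audio, sr = librosa.load("data/bengalese-finch/example.wav", sr=None)

# Per the paper, WhisperSeg outputs the onset, offset, and type of each
# voice-activity segment; assumed here to be returned as a dict.
prediction = segmenter.segment(audio, sr=sr)
print(prediction)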

When using this dataset, please also cite:

@article{10.7554/eLife.68837,
    article_type = {journal},
    title = {Fast and accurate annotation of acoustic signals with deep neural networks},
    author = {Steinfath, Elsa and Palacios-Muñoz, Adrian and Rottschäfer, Julian R and Yuezak, Deniz and Clemens, Jan},
    editor = {Calabrese, Ronald L and Egnor, SE Roian and Troyer, Todd},
    volume = {10},
    year = {2021},
    month = {nov},
    pub_date = {2021-11-01},
    pages = {e68837},
    citation = {eLife 2021;10:e68837},
    doi = {10.7554/eLife.68837},
    url = {https://doi.org/10.7554/eLife.68837},
    abstract = {Acoustic signals serve communication within and across species throughout the animal kingdom. Studying the genetics, evolution, and neurobiology of acoustic communication requires annotating acoustic signals: segmenting and identifying individual acoustic elements like syllables or sound pulses. To be useful, annotations need to be accurate, robust to noise, and fast. We here introduce \textit{DeepAudioSegmenter} (\textit{DAS}), a method that annotates acoustic signals across species based on a deep-learning derived hierarchical presentation of sound. We demonstrate the accuracy, robustness, and speed of \textit{DAS} using acoustic signals with diverse characteristics from insects, birds, and mammals. \textit{DAS} comes with a graphical user interface for annotating song, training the network, and for generating and proofreading annotations. The method can be trained to annotate signals from new species with little manual annotation and can be combined with unsupervised methods to discover novel signal types. \textit{DAS} annotates song with high throughput and low latency for experimental interventions in realtime. Overall, \textit{DAS} is a universal, versatile, and accessible tool for annotating acoustic communication signals.},
    keywords = {acoustic communication, annotation, song, deep learning, bird, fly},
    journal = {eLife},
    issn = {2050-084X},
    publisher = {eLife Sciences Publications, Ltd},
}
@article{Gu2023.09.30.560270,
    author = {Nianlong Gu and Kanghwi Lee and Maris Basha and Sumit Kumar Ram and Guanghao You and Richard Hahnloser},
    title = {Positive Transfer of the Whisper Speech Transformer to Human and Animal Voice Activity Detection},
    elocation-id = {2023.09.30.560270},
    year = {2023},
    doi = {10.1101/2023.09.30.560270},
    publisher = {Cold Spring Harbor Laboratory},
    abstract = {This paper introduces WhisperSeg, utilizing the Whisper Transformer pre-trained for Automatic Speech Recognition (ASR) for human and animal Voice Activity Detection (VAD). Contrary to traditional methods that detect human voice or animal vocalizations from a short audio frame and rely on careful threshold selection, WhisperSeg processes entire spectrograms of long audio and generates plain text representations of onset, offset, and type of voice activity. Processing a longer audio context with a larger network greatly improves detection accuracy from few labeled examples. We further demonstrate a positive transfer of detection performance to new animal species, making our approach viable in the data-scarce multi-species setting.},
    url = {https://www.biorxiv.org/content/early/2023/10/02/2023.09.30.560270},
    eprint = {https://www.biorxiv.org/content/early/2023/10/02/2023.09.30.560270.full.pdf},
    journal = {bioRxiv}
}

Contact

nianlong.gu@uzh.ch
