nianlong committed
Commit cab65ba
1 Parent(s): 6fb13d6

Delete .ipynb_checkpoints
.ipynb_checkpoints/README-checkpoint.md DELETED
@@ -1,41 +0,0 @@
- ---
- license: apache-2.0
- ---
- # Positive Transfer Of The Whisper Speech Transformer To Human And Animal Voice Activity Detection
- We propose WhisperSeg, which utilizes the Whisper Transformer, pre-trained for Automatic Speech Recognition (ASR), for both human and animal Voice Activity Detection (VAD). For more details, please refer to our paper:
-
- >
- > [**Positive Transfer of the Whisper Speech Transformer to Human and Animal Voice Activity Detection**](https://doi.org/10.1101/2023.09.30.560270)
- >
- > Nianlong Gu, Kanghwi Lee, Maris Basha, Sumit Kumar Ram, Guanghao You, Richard H. R. Hahnloser <br>
- > University of Zurich and ETH Zurich
-
- This animal dataset was customized for Animal Voice Activity Detection (vocal segmentation) when training the WhisperSeg segmenter.
-
- ## Download Dataset
- ```python
- from huggingface_hub import snapshot_download
- snapshot_download("nccratliri/vad-animals", local_dir="data/vad-animals", repo_type="dataset")
- ```
-
- For more usage details, please refer to the GitHub repository: https://github.com/nianlonggu/WhisperSeg
-
- ## Citation
- When using this dataset for your work, please cite:
- ```
- @article{Gu2023.09.30.560270,
-     author = {Nianlong Gu and Kanghwi Lee and Maris Basha and Sumit Kumar Ram and Guanghao You and Richard Hahnloser},
-     title = {Positive Transfer of the Whisper Speech Transformer to Human and Animal Voice Activity Detection},
-     elocation-id = {2023.09.30.560270},
-     year = {2023},
-     doi = {10.1101/2023.09.30.560270},
-     publisher = {Cold Spring Harbor Laboratory},
-     abstract = {This paper introduces WhisperSeg, utilizing the Whisper Transformer pre-trained for Automatic Speech Recognition (ASR) for human and animal Voice Activity Detection (VAD). Contrary to traditional methods that detect human voice or animal vocalizations from a short audio frame and rely on careful threshold selection, WhisperSeg processes entire spectrograms of long audio and generates plain text representations of onset, offset, and type of voice activity. Processing a longer audio context with a larger network greatly improves detection accuracy from few labeled examples. We further demonstrate a positive transfer of detection performance to new animal species, making our approach viable in the data-scarce multi-species setting. Competing Interest Statement: The authors have declared no competing interest.},
-     URL = {https://www.biorxiv.org/content/early/2023/10/02/2023.09.30.560270},
-     eprint = {https://www.biorxiv.org/content/early/2023/10/02/2023.09.30.560270.full.pdf},
-     journal = {bioRxiv}
- }
- ```
-
- ## Contact
- nianlong.gu@uzh.ch
.ipynb_checkpoints/process_data-checkpoint.ipynb DELETED
@@ -1,152 +0,0 @@
- {
-  "cells": [
-   {
-    "cell_type": "markdown",
-    "id": "841f9abb-bac7-421a-be91-20cd0e66b565",
-    "metadata": {},
-    "source": [
-     "## Download dataset"
-    ]
-   },
-   {
-    "cell_type": "code",
-    "execution_count": 8,
-    "id": "c686f678-f6be-4227-b3bb-4dc0974d8377",
-    "metadata": {},
-    "outputs": [
-     {
-      "data": {
-       "application/vnd.jupyter.widget-view+json": {
-        "model_id": "da88b797d92d44dba55077c5453eb1b4",
-        "version_major": 2,
-        "version_minor": 0
-       },
-       "text/plain": [
-        "Fetching 7908 files: 0%|          | 0/7908 [00:00<?, ?it/s]"
-       ]
-      },
-      "metadata": {},
-      "output_type": "display_data"
-     },
-     {
-      "data": {
-       "text/plain": [
-        "'/mnt/d360c7ec-336f-4a33-832d-86d6562ba9ab/work/NCCR/requests/WhisperSeg/data/datasets/animals/raw'"
-       ]
-      },
-      "execution_count": 8,
-      "metadata": {},
-      "output_type": "execute_result"
-     }
-    ],
-    "source": [
-     "from huggingface_hub import snapshot_download\n",
-     "snapshot_download('nccratliri/vad-multi-species', local_dir = \"./\", repo_type=\"dataset\" )"
-    ]
-   },
-   {
-    "cell_type": "markdown",
-    "id": "a926f0c1-2c1d-4b4b-b633-2c54c1fc8928",
-    "metadata": {},
-    "source": [
-     "### Use a unified cluster name \"vocal\" for all species, to train a general-purpose VAD model"
-    ]
-   },
-   {
-    "cell_type": "code",
-    "execution_count": 1,
-    "id": "dfd9b300-ac32-422a-b687-6a21f8d0bea9",
-    "metadata": {},
-    "outputs": [],
-    "source": [
-     "from glob import glob\n",
-     "import json\n",
-     "import os"
-    ]
-   },
-   {
-    "cell_type": "code",
-    "execution_count": 4,
-    "id": "a9ab98c2-d445-4004-afb9-77614075005f",
-    "metadata": {},
-    "outputs": [
-     {
-      "data": {
-       "text/plain": [
-        "3953"
-       ]
-      },
-      "execution_count": 4,
-      "metadata": {},
-      "output_type": "execute_result"
-     }
-    ],
-    "source": [
-     "csv_file_list = glob(\"./t*/*.json\")\n",
-     "len(csv_file_list)"
-    ]
-   },
-   {
-    "cell_type": "code",
-    "execution_count": 5,
-    "id": "7b92691e-7dbc-482f-93e1-05054c847196",
-    "metadata": {},
-    "outputs": [
-     {
-      "data": {
-       "text/plain": [
-        "160"
-       ]
-      },
-      "execution_count": 5,
-      "metadata": {},
-      "output_type": "execute_result"
-     }
-    ],
-    "source": [
-     "n_removed = 0\n",
-     "for csv_name in csv_file_list:\n",
-     "    data = json.load(open(csv_name, \"r\"))\n",
-     "    if data[\"species\"] == \"human\":\n",
-     "        audio_name = csv_name[:-4] + \"wav\"\n",
-     "        os.remove( audio_name )\n",
-     "        os.remove( csv_name )\n",
-     "        n_removed +=1\n",
-     "    else:\n",
-     "        data[\"species\"] = \"animal\"\n",
-     "        data[\"cluster\"] = [ \"vocal\" for _ in data[\"cluster\"] ]\n",
-     "        json.dump( data, open( csv_name, \"w\") )\n",
-     "n_removed"
-    ]
-   },
-   {
-    "cell_type": "code",
-    "execution_count": null,
-    "id": "d5bf7554-7faf-486b-9ec9-5f3ccc025757",
-    "metadata": {},
-    "outputs": [],
-    "source": []
-   }
-  ],
-  "metadata": {
-   "kernelspec": {
-    "display_name": "Python 3 (ipykernel)",
-    "language": "python",
-    "name": "python3"
-   },
-   "language_info": {
-    "codemirror_mode": {
-     "name": "ipython",
-     "version": 3
-    },
-    "file_extension": ".py",
-    "mimetype": "text/x-python",
-    "name": "python",
-    "nbconvert_exporter": "python",
-    "pygments_lexer": "ipython3",
-    "version": "3.10.13"
-   }
-  },
-  "nbformat": 4,
-  "nbformat_minor": 5
- }