n-iv committed
Commit 510c5e0
Parent: 682bd16

Update README.md

Files changed (1): README.md +39 -2
README.md CHANGED
@@ -22,7 +22,44 @@ configs:
   data_files:
   - split: train
     path: data/train-*
+ license: cc-by-4.0
+ task_categories:
+ - audio-classification
+ pretty_name: VocalSimilarity
+ size_categories:
+ - 100K<n<1M
   ---
- # Dataset Card for "vocsim"
-
- [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+ ### Dataset Description
+
+ "Benchmarking embeddings for retrieval and discrimination of vocalizations in humans and songbirds".
+ This aggregated benchmark consists of vocalization samples from humans and songbirds.
+
+ ### Data Fields
+
+ 1. **Subset**: The subset/category of the dataset. It indicates whether the sample is from humans or songbirds, possibly with finer-grained categorization.
+
+ 2. **Audio**: The audio sample.
+
+ 3. **Label**: The label or class of the audio clip, indicating the type of vocalization or sound.
+
+ 4. **Speaker**: The speaker or source of the vocalization for the human datasets, or the individual bird for the songbird datasets.
+
+ ### Human Datasets
+
+ 1. **AMI**: The AMI Meeting Corpus comprises 100 hours of multi-modal meeting recordings, including audio for utterances, words, and vocal sounds, alongside detailed speaker metadata.
+
+ 2. **TIMIT**: The TIMIT dataset contains manual phonetic transcriptions of utterances read by 630 English speakers of various dialects.
+
+ 3. **VocImSet**: The Vocal Imitation Set features recordings of 236 unique sound sources imitated by 248 speakers.
+
+ ### Songbird Datasets
+
+ 1. **Tomka**: The Gold-Standard Zebrafinch dataset contains 48,059 vocalizations of 36 vocalization types from 4 zebra finches.
+
+ 2. **Nicholson**: The Bengalese finch song repository includes songs of four Bengalese finches, recorded in the Sober lab at Emory University and manually clustered by two of the authors.
+
+ 3. **DAS**: The Deep Audio Segmenter dataset features songs of a single male Bengalese finch, comprising 473 vocalizations of 6 vocalization types.
+
+ 4. **Elie**: Vocal repertoires from zebra finches, collected between 2011 and 2014 at the University of California, Berkeley by Julie E. Elie. This dataset contains 3,500 vocalizations from 50 individuals and 65 vocalization types.
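The four data fields in the card above can be sketched with a short example. This is a minimal illustration, not taken from the real data: the repository id, field names' exact casing, and all values below are assumptions; actual loading would go through the `datasets` library.

```python
from collections import defaultdict

# Hypothetical loading step (the repository id is an assumption, not confirmed by the card):
# from datasets import load_dataset
# ds = load_dataset("n-iv/vocsim", split="train")

# A single record, mirroring the four fields described in the card
# (all values are illustrative placeholders):
example = {
    "subset": "TIMIT",                      # which source dataset/category the clip came from
    "audio": {
        "array": [0.0, 0.013, -0.021],      # waveform samples (truncated for the sketch)
        "sampling_rate": 16000,             # Hz
    },
    "label": "utterance",                   # vocalization type / class
    "speaker": "spk_042",                   # human speaker, or individual bird for songbird subsets
}

# Typical use: group clips by speaker/individual, e.g. for per-individual retrieval evaluation.
by_speaker = defaultdict(list)
for record in [example]:
    by_speaker[record["speaker"]].append(record["label"])

print(sorted(by_speaker))      # -> ['spk_042']
print(by_speaker["spk_042"])   # -> ['utterance']
```

Grouping by the `speaker` field is the natural unit for the benchmark's discrimination task, since it separates clips by individual rather than by vocalization class.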