aheba31 committed on
Commit a1958b0
1 Parent(s): c28f344

commit predictor test
README.md ADDED
@@ -0,0 +1,143 @@
---
language: "en"
thumbnail:
tags:
- speechbrain
- embeddings
- Speaker
- Verification
- Identification
- pytorch
- ECAPA
- TDNN
license: "apache-2.0"
datasets:
- voxceleb
metrics:
- EER
widget:
- label: VoxCeleb Speaker id10003
  src: https://cdn-media.huggingface.co/speech_samples/VoxCeleb1_00003.wav
- label: VoxCeleb Speaker id10004
  src: https://cdn-media.huggingface.co/speech_samples/VoxCeleb_00004.wav
---

<iframe src="https://ghbtns.com/github-btn.html?user=speechbrain&repo=speechbrain&type=star&count=true&size=large&v=2" frameborder="0" scrolling="0" width="170" height="30" title="GitHub"></iframe>
<br/><br/>

# Speaker Verification with ECAPA-TDNN embeddings on VoxCeleb

This repository provides all the necessary tools to perform speaker verification with a pretrained ECAPA-TDNN model using SpeechBrain.
The system can also be used to extract speaker embeddings.
It is trained on VoxCeleb1 + VoxCeleb2 training data.

For a better experience, we encourage you to learn more about
[SpeechBrain](https://speechbrain.github.io). The model performance on the VoxCeleb1 test set (cleaned) is:

| Release  | EER (%) | minDCF  |
|:--------:|:-------:|:-------:|
| 05-03-21 | 0.69    | 0.08258 |


## Pipeline description

This system is composed of an ECAPA-TDNN model, a combination of convolutional and residual blocks. The embeddings are extracted using attentive statistical pooling. The system is trained with Additive Margin Softmax loss. Speaker verification is performed by measuring the cosine distance between speaker embeddings.

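The scoring step can be illustrated with a short sketch: compare two embeddings with cosine similarity and threshold the score. The helper name and the 0.25 threshold are illustrative assumptions, not the tuned operating point of this model.

```python
import torch
import torch.nn.functional as F

def cosine_score(emb1: torch.Tensor, emb2: torch.Tensor, threshold: float = 0.25):
    """Illustrative scoring: cosine similarity between two embeddings.

    The threshold is an assumption for illustration; in practice it is
    tuned on a development set to minimize EER/minDCF.
    """
    score = F.cosine_similarity(emb1.flatten(), emb2.flatten(), dim=0)
    return score.item(), int(score > threshold)
```
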
## Install SpeechBrain

First of all, please install SpeechBrain with the following commands:

```bash
gh repo clone aheba/speechbrain-aheba-contribs
cd speechbrain-aheba-contribs
git checkout pretrain_new
pip install -r requirements.txt
pip install --editable .
```

Please note that we encourage you to read our tutorials and learn more about
[SpeechBrain](https://speechbrain.github.io).

### Compute your speaker embeddings

```python
import torchaudio
from speechbrain.pretrained import Predictor

classifier = Predictor.import_model(source="aheba31/test-predictor")
signal, fs = torchaudio.load('samples/audio_samples/example1.wav')
embeddings = classifier.encode_batch(signal)
```

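`encode_batch` also accepts several padded waveforms at once together with their relative lengths (see the docstrings in `inference.py` below). A minimal sketch, assuming two 16 kHz files (the paths are illustrative):

```python
import torch
import torchaudio

sig1, fs = torchaudio.load('samples/audio_samples/example1.wav')
sig2, fs = torchaudio.load('samples/audio_samples/example2.flac')
sig1, sig2 = sig1.squeeze(0), sig2.squeeze(0)

# Zero-pad to the longest signal and stack into a [batch, time] tensor
max_len = max(sig1.shape[0], sig2.shape[0])
batch = torch.zeros(2, max_len)
batch[0, : sig1.shape[0]] = sig1
batch[1, : sig2.shape[0]] = sig2

# Relative lengths: the longest signal has relative length 1.0
wav_lens = torch.tensor([sig1.shape[0], sig2.shape[0]]) / max_len
embeddings = classifier.encode_batch(batch, wav_lens)
```
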
### Perform Speaker Verification

```python
from speechbrain.pretrained import SpeakerRecognition

verification = SpeakerRecognition.from_hparams(source="aheba31/test-predictor", savedir="aheba31/test-predictor")
score, prediction = verification.verify_files("speechbrain/spkrec-ecapa-voxceleb/example1.wav", "speechbrain/spkrec-ecapa-voxceleb/example2.flac")
```

The prediction is 1 if the two input signals are from the same speaker and 0 otherwise.

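If the waveforms are already in memory, the same check can be run batch-wise. A minimal sketch, assuming `verify_batch` behaves as in upstream SpeechBrain's `SpeakerRecognition` interface (the file paths are illustrative):

```python
import torchaudio

wav1, fs = torchaudio.load("example1.wav")
wav2, fs = torchaudio.load("example2.flac")
score, prediction = verification.verify_batch(wav1, wav2)
```
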
### Inference on GPU

To perform inference on the GPU, add `run_opts={"device":"cuda"}` when calling the `from_hparams` method.

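For example:

```python
from speechbrain.pretrained import SpeakerRecognition

verification = SpeakerRecognition.from_hparams(
    source="aheba31/test-predictor",
    savedir="aheba31/test-predictor",
    run_opts={"device": "cuda"},
)
```
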
### Training

The model was trained with SpeechBrain (commit `aa018540`).
To train it from scratch, follow these steps:

1. Clone SpeechBrain:
```bash
git clone https://github.com/speechbrain/speechbrain/
```

2. Install it:
```bash
cd speechbrain
pip install -r requirements.txt
pip install -e .
```

3. Run training:
```bash
cd recipes/VoxCeleb/SpeakerRec
python train_speaker_embeddings.py hparams/train_ecapa_tdnn.yaml --data_folder=your_data_folder
```

You can find our training results (models, logs, etc.) [here](https://drive.google.com/drive/folders/1-ahC1xeyPinAHp2oAohL-02smNWO41Cc?usp=sharing).

### Limitations

The SpeechBrain team does not provide any warranty on the performance achieved by this model when used on other datasets.

#### Referencing ECAPA-TDNN

```bibtex
@inproceedings{DBLP:conf/interspeech/DesplanquesTD20,
  author    = {Brecht Desplanques and
               Jenthe Thienpondt and
               Kris Demuynck},
  editor    = {Helen Meng and
               Bo Xu and
               Thomas Fang Zheng},
  title     = {{ECAPA-TDNN:} Emphasized Channel Attention, Propagation and Aggregation
               in {TDNN} Based Speaker Verification},
  booktitle = {Interspeech 2020},
  pages     = {3830--3834},
  publisher = {{ISCA}},
  year      = {2020},
}
```

# **Citing SpeechBrain**

Please cite SpeechBrain if you use it for your research or business.

```bibtex
@misc{speechbrain,
  title={{SpeechBrain}: A General-Purpose Speech Toolkit},
  author={Mirco Ravanelli and Titouan Parcollet and Peter Plantinga and Aku Rouhe and Samuele Cornell and Loren Lugosch and Cem Subakan and Nauman Dawalatabad and Abdelwahab Heba and Jianyuan Zhong and Ju-Chieh Chou and Sung-Lin Yeh and Szu-Wei Fu and Chien-Feng Liao and Elena Rastorgueva and François Grondin and William Aris and Hwidong Na and Yan Gao and Renato De Mori and Yoshua Bengio},
  year={2021},
  eprint={2106.04624},
  archivePrefix={arXiv},
  primaryClass={eess.AS},
  note={arXiv:2106.04624}
}
```

# **About SpeechBrain**

- Website: https://speechbrain.github.io/
- Code: https://github.com/speechbrain/speechbrain/
- HuggingFace: https://huggingface.co/speechbrain/
classifier.ckpt ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:fd9e3634fe68bd0a427c95e354c0c677374f62b3f434e45b78599950d860d535
size 5534328
config.json ADDED
@@ -0,0 +1,3 @@
{
    "speechbrain_interface": "SpeakerRecognition"
}
embedding_model.ckpt ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:0575cb64845e6b9a10db9bcb74d5ac32b326b8dc90352671d345e2ee3d0126a2
size 83316686
example1.wav ADDED
Binary file (104 kB).
 
example2.flac ADDED
Binary file (39.6 kB).
 
hyperparams.yaml ADDED
@@ -0,0 +1,65 @@
# ############################################################################
# Model: ECAPA big for Speaker verification
# ############################################################################

# Hparams NEEDED
HPARAMS_NEEDED: ["label_encoder"]
# Modules Needed
MODULES_NEEDED: ["compute_features", "mean_var_norm", "embedding_model", "classifier"]

# Feature parameters
n_mels: 80

# Pretrain folder (HuggingFace)
pretrained_path: speechbrain/spkrec-ecapa-voxceleb

# Output parameters
out_n_neurons: 7205

# Model params
compute_features: !new:speechbrain.lobes.features.Fbank
    n_mels: !ref <n_mels>

mean_var_norm: !new:speechbrain.processing.features.InputNormalization
    norm_type: sentence
    std_norm: False

embedding_model: !new:speechbrain.lobes.models.ECAPA_TDNN.ECAPA_TDNN
    input_size: !ref <n_mels>
    channels: [1024, 1024, 1024, 1024, 3072]
    kernel_sizes: [5, 3, 3, 3, 1]
    dilations: [1, 2, 3, 4, 1]
    attention_channels: 128
    lin_neurons: 192

classifier: !new:speechbrain.lobes.models.ECAPA_TDNN.Classifier
    input_size: 192
    out_neurons: !ref <out_n_neurons>

mean_var_norm_emb: !new:speechbrain.processing.features.InputNormalization
    norm_type: global
    std_norm: False
    update_until_epoch: -1 # Freeze the normalization

modules:
    compute_features: !ref <compute_features>
    mean_var_norm: !ref <mean_var_norm>
    embedding_model: !ref <embedding_model>
    mean_var_norm_emb: !ref <mean_var_norm_emb>
    classifier: !ref <classifier>

label_encoder: !new:speechbrain.dataio.encoder.CategoricalEncoder

pretrainer: !new:speechbrain.utils.parameter_transfer.Pretrainer
    loadables:
        embedding_model: !ref <embedding_model>
        mean_var_norm_emb: !ref <mean_var_norm_emb>
        classifier: !ref <classifier>
        label_encoder: !ref <label_encoder>
    paths:
        embedding_model: !ref <pretrained_path>/embedding_model.ckpt
        mean_var_norm_emb: !ref <pretrained_path>/mean_var_norm_emb.ckpt
        classifier: !ref <pretrained_path>/classifier.ckpt
        label_encoder: !ref <pretrained_path>/label_encoder.txt
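For reference, the sketch below shows roughly how SpeechBrain consumes a hyperparams file like this one. It is a minimal sketch assuming the `hyperpyyaml` package and the `Pretrainer` API behave as in upstream SpeechBrain; the pretrained interfaces above handle all of this internally.

```python
from hyperpyyaml import load_hyperpyyaml

# !new: tags instantiate the modules; !ref tags resolve
# cross-references like <n_mels> within the file.
with open("hyperparams.yaml") as f:
    hparams = load_hyperpyyaml(f)

# The pretrainer maps each loadable (embedding_model, classifier, ...)
# to its checkpoint path, fetches the files, and loads the parameters.
pretrainer = hparams["pretrainer"]
pretrainer.collect_files()
pretrainer.load_collected()

embedding_model = hparams["embedding_model"].eval()
```
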
inference.py ADDED
@@ -0,0 +1,119 @@
import torch

from speechbrain.pretrained import Pretrained


class Predictor(Pretrained):
    # Assumed wrapper: the original file contains only the methods below,
    # which rely on attributes provided by speechbrain.pretrained.Pretrained
    # (self.mods, self.hparams, self.device, self.load_audio).

    def forward(self, wavs, wav_lens=None):
        """Runs the classification"""
        return self.classify_batch(wavs, wav_lens)

    def encode_batch(self, wavs, wav_lens=None, normalize=False):
        """Encodes the input audio into a single vector embedding.

        The waveforms should already be in the model's desired format.
        You can call:
        ``normalized = <this>.normalizer(signal, sample_rate)``
        to get a correctly converted signal in most cases.

        Arguments
        ---------
        wavs : torch.tensor
            Batch of waveforms [batch, time, channels] or [batch, time]
            depending on the model. Make sure the sample rate is fs=16000 Hz.
        wav_lens : torch.tensor
            Lengths of the waveforms relative to the longest one in the
            batch, tensor of shape [batch]. The longest one should have
            relative length 1.0 and others len(waveform) / max_length.
            Used for ignoring padding.
        normalize : bool
            If True, it normalizes the embeddings with the statistics
            contained in mean_var_norm_emb.

        Returns
        -------
        torch.tensor
            The encoded batch
        """
        # Manage single waveforms in input
        if len(wavs.shape) == 1:
            wavs = wavs.unsqueeze(0)

        # Assign full length if wav_lens is not assigned
        if wav_lens is None:
            wav_lens = torch.ones(wavs.shape[0], device=self.device)

        # Storing waveform in the specified device
        wavs, wav_lens = wavs.to(self.device), wav_lens.to(self.device)
        wavs = wavs.float()

        # Computing features and embeddings
        feats = self.mods.compute_features(wavs)
        feats = self.mods.mean_var_norm(feats, wav_lens)
        embeddings = self.mods.embedding_model(feats, wav_lens)
        if normalize:
            embeddings = self.hparams.mean_var_norm_emb(
                embeddings, torch.ones(embeddings.shape[0], device=self.device)
            )
        return embeddings

    def classify_batch(self, wavs, wav_lens=None):
        """Performs classification on top of the encoded features.

        It returns the posterior probabilities, the index and, if the label
        encoder is specified, the text label.

        Arguments
        ---------
        wavs : torch.tensor
            Batch of waveforms [batch, time, channels] or [batch, time]
            depending on the model. Make sure the sample rate is fs=16000 Hz.
        wav_lens : torch.tensor
            Lengths of the waveforms relative to the longest one in the
            batch, tensor of shape [batch]. The longest one should have
            relative length 1.0 and others len(waveform) / max_length.
            Used for ignoring padding.

        Returns
        -------
        out_prob
            The log posterior probabilities of each class ([batch, N_class])
        score
            The value of the log-posterior for the best class ([batch,])
        index
            The indexes of the best class ([batch,])
        text_lab
            List with the text labels corresponding to the indexes.
            (label encoder should be provided).
        """
        emb = self.encode_batch(wavs, wav_lens)
        out_prob = self.mods.classifier(emb).squeeze(1)
        score, index = torch.max(out_prob, dim=-1)
        text_lab = self.hparams.label_encoder.decode_torch(index)
        return out_prob, score, index, text_lab

    def classify_file(self, path):
        """Classifies the given audiofile into the given set of labels.

        Arguments
        ---------
        path : str
            Path to audio file to classify.

        Returns
        -------
        out_prob
            The log posterior probabilities of each class ([batch, N_class])
        score
            The value of the log-posterior for the best class ([batch,])
        index
            The indexes of the best class ([batch,])
        text_lab
            List with the text labels corresponding to the indexes.
            (label encoder should be provided).
        """
        waveform = self.load_audio(path)
        # Fake a batch:
        batch = waveform.unsqueeze(0)
        rel_length = torch.tensor([1.0])
        emb = self.encode_batch(batch, rel_length)
        out_prob = self.mods.classifier(emb).squeeze(1)
        score, index = torch.max(out_prob, dim=-1)
        text_lab = self.hparams.label_encoder.decode_torch(index)
        return out_prob, score, index, text_lab
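
These methods mirror SpeechBrain's `EncoderClassifier` interface. A minimal usage sketch, assuming the model is loaded through the `Predictor` interface shown in the README (the file name is illustrative):

```python
from speechbrain.pretrained import Predictor

classifier = Predictor.import_model(source="aheba31/test-predictor")

# classify_file loads the audio, fakes a batch, and scores every class
out_prob, score, index, text_lab = classifier.classify_file("example1.wav")
print(f"Predicted speaker: {text_lab[0]} (log-posterior: {score.item():.2f})")
```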
label_encoder.txt ADDED
The diff for this file is too large to render.
mean_var_norm_emb.ckpt ADDED
Binary file (1.92 kB).