---
language: "en"
inference: false
tags:
- Vocoder
- HiFIGAN
- speech-synthesis
- speechbrain
license: "apache-2.0"
datasets:
- Libri-TTS
---
# Vocoder with HiFIGAN Unit
## Work In Progress ....
## Install SpeechBrain
First of all, please install transformers and SpeechBrain with the following command:
```bash
pip install speechbrain transformers
```
Please note that we encourage you to read our tutorials and learn more about
[SpeechBrain](https://speechbrain.github.io).
### Using the Vocoder
```python
import torchaudio
from speechbrain.inference.encoders import MelSpectrogramEncoder
from speechbrain.inference.vocoders import UnitHIFIGAN
from speechbrain.lobes.models.huggingface_transformers.discrete_hubert import (
    DiscreteHuBERT,
)

# Model sources and settings
speaker_encoder_source = "speechbrain/spkrec-ecapa-voxceleb-mel-spec"
speech_encoder_source = "facebook/hubert-base-ls960"
kmeans_folder = "speechbrain/SSL_Quantization"
kmeans_filename = "LJSpeech_hubert_k128_L7.pt"
layer = 7
vocoder_source = "chaanks/hifigan-unit-hubert-l7-k128-ljspeech-libritts"
save_path = "tmpdir"
device = "cuda"
sample_rate = 16000
wav = "chaanks/hifigan-unit-hubert-l7-k128-ljspeech-libritts/test.wav"

# Speaker encoder: extracts a speaker embedding from the mel spectrogram
speaker_encoder = MelSpectrogramEncoder.from_hparams(
    source=speaker_encoder_source,
    run_opts={"device": str(device)},
    savedir=save_path + "/spk_encoder",
)

# Speech encoder: HuBERT layer-7 features quantized with k-means (k=128)
speech_encoder = DiscreteHuBERT(
    source=speech_encoder_source,
    save_path=save_path + "/speech_encoder",
    kmeans_filename=kmeans_filename,
    kmeans_cache_dir=save_path + "/kmeans",
    kmeans_repo_id=kmeans_folder,
    output_norm=False,
    freeze=True,
    freeze_feature_extractor=True,
    apply_spec_augment=False,
    output_all_hiddens=True,
    ssl_layer_num=layer,
).to(device)

# Vocoder: converts discrete units + speaker embedding back into a waveform
vocoder = UnitHIFIGAN.from_hparams(
    source=vocoder_source,
    run_opts={"device": str(device)},
    savedir=save_path + "/vocoder",
)

# Load the test utterance and compute the speaker embedding
audio = speaker_encoder.load_audio(wav)
audio = audio.to(device)
spk = speaker_encoder.encode_waveform(audio)

# Extract discrete units and resynthesize the waveform
_, codes = speech_encoder(audio.unsqueeze(0))
waveform = vocoder.decode_unit(codes.squeeze(0), spk=spk.reshape(-1))

# Save the generated audio
torchaudio.save("test.wav", waveform.cpu(), sample_rate=sample_rate)
```
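To resynthesize your own recording rather than the bundled `test.wav`, the models loaded above can be reused directly. The following is a minimal sketch, not part of the original recipe: the file name `my_speech.wav` is a placeholder, and it assumes the input should be converted to 16 kHz mono before encoding.
```python
import torchaudio
import torchaudio.functional as F

# "my_speech.wav" is a placeholder path for your own recording
sig, sr = torchaudio.load("my_speech.wav")  # shape: [channels, time]
sig = sig.mean(dim=0)                       # downmix to mono -> [time]
if sr != sample_rate:
    # The encoders and vocoder above operate at 16 kHz
    sig = F.resample(sig, orig_freq=sr, new_freq=sample_rate)
sig = sig.to(device)

# Same pipeline as above: speaker embedding + discrete units -> waveform
spk = speaker_encoder.encode_waveform(sig)
_, codes = speech_encoder(sig.unsqueeze(0))
waveform = vocoder.decode_unit(codes.squeeze(0), spk=spk.reshape(-1))
torchaudio.save("my_speech_resynth.wav", waveform.cpu(), sample_rate=sample_rate)
```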
### Inference on GPU
To perform inference on the GPU, add `run_opts={"device":"cuda"}` when calling the `from_hparams` method.
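For example, to load the vocoder directly on a CUDA device (the other `from_hparams` calls in the snippet above accept the same option):
```python
# Load the vocoder on the GPU via run_opts
vocoder = UnitHIFIGAN.from_hparams(
    source="chaanks/hifigan-unit-hubert-l7-k128-ljspeech-libritts",
    savedir="tmpdir/vocoder",
    run_opts={"device": "cuda"},
)
```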
### Limitations
The SpeechBrain team does not provide any warranty on the performance achieved by this model when used on other datasets.
#### Referencing SpeechBrain
```bibtex
@misc{SB2021,
author = {Ravanelli, Mirco and Parcollet, Titouan and Rouhe, Aku and Plantinga, Peter and Rastorgueva, Elena and Lugosch, Loren and Dawalatabad, Nauman and Ju-Chieh, Chou and Heba, Abdel and Grondin, Francois and Aris, William and Liao, Chien-Feng and Cornell, Samuele and Yeh, Sung-Lin and Na, Hwidong and Gao, Yan and Fu, Szu-Wei and Subakan, Cem and De Mori, Renato and Bengio, Yoshua },
title = {SpeechBrain},
year = {2021},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/speechbrain/speechbrain}},
}
```
#### About SpeechBrain
SpeechBrain is an open-source, all-in-one speech toolkit. It is designed to be simple, extremely flexible, and user-friendly, and it achieves competitive or state-of-the-art performance in various domains.

Website: https://speechbrain.github.io/

GitHub: https://github.com/speechbrain/speechbrain