initial_commit (#1)
initial commit (cc5944e85eb6c8477201e69658b3e768f6db596b)
- README.md +143 -1
- hyperparams.yaml +94 -0
- llama3.ckpt +3 -0
- model.ckpt +3 -0
README.md
CHANGED
@@ -1,3 +1,145 @@
---
language: "en"
thumbnail:
tags:
- speech-llm
- audio-llm
- speechbrain
- pytorch
license: "apache-2.0"
datasets:
- openasqa
- iemocap
- libritts
- audioset
- audioset-strong
- audiocaps
- vgg-sound
- voxceleb2
- cmu-mosei
- clotho
- fsd50k
- fma
inference: false
---

<iframe src="https://ghbtns.com/github-btn.html?user=speechbrain&repo=speechbrain&type=star&count=true&size=large&v=2" frameborder="0" scrolling="0" width="170" height="30" title="GitHub"></iframe>
<br/><br/>

# LTU-AS: an Audio/Speech LLM trained on the OpenASQA dataset

This repository provides all the necessary tools to run inference with a speech LLM using SpeechBrain. For more details, please check the LTU-AS [paper](https://arxiv.org/pdf/2309.14405).

For a better experience, we encourage you to learn more about [SpeechBrain](https://speechbrain.github.io). The model performance is evaluated on 5 different tasks:

| model | Emotion Recognition IEMOCAP (Acc) | ASR LibriSpeech test-clean (WER) | Audio Classification ESC-50 (Acc) | Age Prediction VoxCeleb2-test (MAE) | Gender Classification VoxCeleb2-test (F1) |
|:---:|:---:|:---:|:---:|:---:|:---:|
| original model in the paper | 65.2% | 4.9% | 80.8% | 7.3 | 90.8% |
| our model | 69.5% | 1.45% (with Whisper large v3) | 76.6% | 6.67 | 98.8% |

## Pipeline description

1. A Whisper encoder together with a TLTR (time- and layer-wise Transformer) is used as the audio encoder to produce acoustic embeddings.
2. For speech, an external ASR system (whisper-large-v3 here) is used to obtain the spoken text.
3. The spoken text is inserted into a user prompt and transformed into text embeddings, which are then concatenated with the acoustic embeddings from the first step and fed to a fine-tuned Llama 3. A minimal sketch of this flow is given below.
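
As a rough illustration of these three steps, here is a minimal, self-contained sketch of the embedding flow using random tensors as stand-ins for the real modules. The shapes follow `hyperparams.yaml` (`whisper_output_dim: 1280`, `tltr_layers: 32`, `pooling_kernel: 20`, `llama_hidden_size: 4096`); the mean over layers is only a placeholder for the actual TLTR module, a plain `Linear` stands in for the `AudioProjection` module, and the prompt length is arbitrary.

```python
import torch

# Shapes taken from hyperparams.yaml: Whisper-large hidden size 1280,
# 32 encoder layers, pooling kernel 20, Llama 3 hidden size 4096.
batch, n_layers, time_steps, whisper_dim, llama_dim = 1, 32, 1500, 1280, 4096

# 1. Layer-wise Whisper encoder states (random stand-in for the frozen encoder).
hidden_states = torch.randn(n_layers, batch, time_steps, whisper_dim)

# Average-pool the time axis by a factor of 20 before the TLTR.
pooled = torch.nn.functional.avg_pool1d(
    hidden_states.flatten(0, 1).transpose(1, 2), kernel_size=20
).transpose(1, 2).unflatten(0, (n_layers, batch))      # (32, 1, 75, 1280)

# 2. The TLTR attends over time and layers; a mean over layers stands in here.
audio_repr = pooled.mean(dim=0)                        # (1, 75, 1280)

# 3. Project to the LLM embedding space and concatenate with the prompt embeddings
#    (in the real pipeline these come from the Llama 3 embedding table).
audio_proj = torch.nn.Linear(whisper_dim, llama_dim)   # stand-in for AudioProjection
audio_emb = audio_proj(audio_repr)                     # (1, 75, 4096)
prompt_emb = torch.randn(batch, 42, llama_dim)         # embedded prompt + transcript
llm_inputs = torch.cat([audio_emb, prompt_emb], dim=1) # input to the fine-tuned Llama 3
```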

## Install SpeechBrain

First of all, please install the **development** version of SpeechBrain with the following command:

```
git clone https://github.com/speechbrain/speechbrain.git
cd speechbrain
pip install -r requirements.txt
pip install --editable .
```

Please note that we encourage you to read our tutorials and learn more about [SpeechBrain](https://speechbrain.github.io).
### Inference of LTU-AS

```python
import torch
from speechbrain.inference.multimodal import LTU_AS
from transformers import AutoModelForSpeechSeq2Seq, AutoProcessor, pipeline

ltu_as = LTU_AS.from_hparams(
    source="speechbrain/speech-llm-LTU-AS-openasqa"
)

# whisper-large-v3 as the ASR model; can be changed to any customised ASR model
device = "cuda:0" if torch.cuda.is_available() else "cpu"
torch_dtype = torch.float16 if torch.cuda.is_available() else torch.float32
model_id = "openai/whisper-large-v3"
model = AutoModelForSpeechSeq2Seq.from_pretrained(
    model_id, torch_dtype=torch_dtype, low_cpu_mem_usage=True, use_safetensors=True
)
model.to(device)

processor = AutoProcessor.from_pretrained(model_id)
pipe = pipeline(
    "automatic-speech-recognition",
    model=model,
    tokenizer=processor.tokenizer,
    feature_extractor=processor.feature_extractor,
    max_new_tokens=128,
    chunk_length_s=30,
    batch_size=16,
    return_timestamps=False,
    torch_dtype=torch_dtype,
    device=device,
)

# start an inference loop
while True:
    audio_path = input("please enter the raw audio path: ")
    instruction = input("please enter the instruction: ")
    transcript = " " + pipe(audio_path)["text"]
    predicted_words = ltu_as.generate_with_raw_audio(audio_path, instruction, transcript)[0]
    print("\n")
    print(predicted_words)
    print("\n")
```

### Inference on GPU

To perform inference on the GPU, add `run_opts={"device": "cuda"}` when calling the `from_hparams` method, for example:
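
```python
ltu_as = LTU_AS.from_hparams(
    source="speechbrain/speech-llm-LTU-AS-openasqa",
    run_opts={"device": "cuda"},
)
```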

### Training

The training information can be found [here]().

You can find our training results (models, logs, etc.) [here]().

### Limitations

The SpeechBrain team does not provide any warranty on the performance achieved by this model when used on other datasets.

# **Citing LTU-AS**

```bibtex
@inproceedings{gong_ltuas,
    title={Joint Audio and Speech Understanding},
    author={Gong, Yuan and Liu, Alexander H and Luo, Hongyin and Karlinsky, Leonid and Glass, James},
    year={2023},
    booktitle={2023 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU)},
}
```

# **Citing SpeechBrain**

Please cite SpeechBrain if you use it for your research or business.

```bibtex
@misc{speechbrain,
    title={{SpeechBrain}: A General-Purpose Speech Toolkit},
    author={Mirco Ravanelli and Titouan Parcollet and Peter Plantinga and Aku Rouhe and Samuele Cornell and Loren Lugosch and Cem Subakan and Nauman Dawalatabad and Abdelwahab Heba and Jianyuan Zhong and Ju-Chieh Chou and Sung-Lin Yeh and Szu-Wei Fu and Chien-Feng Liao and Elena Rastorgueva and François Grondin and William Aris and Hwidong Na and Yan Gao and Renato De Mori and Yoshua Bengio},
    year={2021},
    eprint={2106.04624},
    archivePrefix={arXiv},
    primaryClass={eess.AS},
    note={arXiv:2106.04624}
}
```

# **About SpeechBrain**

- Website: https://speechbrain.github.io/
- Code: https://github.com/speechbrain/speechbrain/
- HuggingFace: https://huggingface.co/speechbrain/
hyperparams.yaml
ADDED
@@ -0,0 +1,94 @@
# ################################
# Model: Whisper + TLTR + Audio_Proj + LLaMa3
# Authors: Yingzhi Wang 2024
# ################################

# URL for the LLAMA3 model and its save folder
llama_hub: meta-llama/Meta-Llama-3-8B-Instruct # lmsys/vicuna-7b-v1.5
llama3_folder: llama3_checkpoint

# llama generation config
num_beams: 3
max_new_tokens: 400
top_k: 500
top_p: 0.95
temperature: 0.1
repetition_penalty: 1.1

# lora config
lora_dropout: 0.05
lora_alpha: 16
r: 8
bias: "none"
task_type: "CAUSAL_LM"
lora_target_modules: ["q_proj", "v_proj"]

# URL for the whisper model
whisper_hub: openai/whisper-large
whisper_folder: whisper_checkpoint
freeze_whisper: True
whisper_output_dim: 1280

# average pooling
pooling_kernel: 20

# Audio Tagging model
tltr_layers: 32
llama_hidden_size: 4096

# Masks
audio_padding_mask: !name:speechbrain.dataio.dataio.length_to_mask
text_padding_mask: !name:speechbrain.lobes.models.transformer.Transformer.get_key_padding_mask

whisper: !new:speechbrain.lobes.models.huggingface_transformers.whisper.Whisper
    source: !ref <whisper_hub>
    freeze: !ref <freeze_whisper>
    save_path: !ref <whisper_folder>
    encoder_only: True
    output_all_hiddens: True

avg_pool: !new:speechbrain.nnet.pooling.Pooling1d
    pool_type: "avg"
    kernel_size: !ref <pooling_kernel>

tltr: !new:speechbrain.lobes.models.TLTR.AT_MODEL
    n_layer: !ref <tltr_layers>
    rep_dim: !ref <whisper_output_dim>
    freeze: True

audio_proj: !new:speechbrain.lobes.models.TLTR.AudioProjection
    input_size: !ref <whisper_output_dim>
    hidden_size: !ref <llama_hidden_size>

# LLAMA3 model
# llama3: null
llama3: !new:speechbrain.lobes.models.huggingface_transformers.llama2.LLAMA2
    source: !ref <llama_hub>
    freeze: True
    save_path: !ref <llama3_folder>
    max_new_tokens: !ref <max_new_tokens>
    num_beams: !ref <num_beams>
    top_k: !ref <top_k>
    top_p: !ref <top_p>
    temperature: !ref <temperature>
    repetition_penalty: !ref <repetition_penalty>
    with_peft: True
    lora_alpha: !ref <lora_alpha>
    lora_dropout: !ref <lora_dropout>
    r: !ref <r>
    bias: !ref <bias>
    task_type: !ref <task_type>
    lora_target_modules: !ref <lora_target_modules>

modules:
    tltr: !ref <tltr>
    audio_proj: !ref <audio_proj>
    llama3: !ref <llama3>

model: !new:torch.nn.ModuleList
    - [!ref <tltr>, !ref <audio_proj>]

pretrainer: !new:speechbrain.utils.parameter_transfer.Pretrainer
    loadables:
        llama3: !ref <llama3>
        model: !ref <model>
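Note that the generation settings above (`num_beams`, `top_k`, `top_p`, `temperature`, `repetition_penalty`, `max_new_tokens`) are plain hyperparameters, so they can also be adjusted at load time through the standard `overrides` argument of SpeechBrain's `from_hparams`, without editing this file. A minimal sketch; the override values are arbitrary examples, not tuned recommendations:

```python
from speechbrain.inference.multimodal import LTU_AS

# Override generation hyperparameters at load time; the values below are
# illustrative only, not recommended settings.
ltu_as = LTU_AS.from_hparams(
    source="speechbrain/speech-llm-LTU-AS-openasqa",
    overrides={"num_beams": 1, "max_new_tokens": 200, "temperature": 0.7},
)
```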
llama3.ckpt
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:ed51c760725b7e5b788e2e89faa2f39fef96f3d220a7216ff47e99ad9771036e
size 32134919365
model.ckpt
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:4e47c0a78324be1c051933bf6b5c9e7f49173e6a93a11e91dfb0c5adb6c0b342
size 181118618