---
license: mit
pipeline_tag: any-to-any
library_name: mini-omni2
datasets:
- nvidia/OpenMathInstruct-2
language:
- en
metrics:
- accuracy
base_model:
- microsoft/OmniParser
new_version: rhymes-ai/Aria
tags:
- music
- finance
- legal
- code
- chemistry
- not-for-all-audiences
- art
---

# Mini-Omni2

<!-- <p align="center">
    <img src="./data/figures/title.png" width="100%"/>
</p> -->


<p align="center">
πŸ€— <a href="https://huggingface.co/gpt-omni/mini-omni2">Hugging Face</a>   | πŸ“– <a href="https://github.com/gpt-omni/mini-omni2">Github</a>
|     πŸ“‘ <a href="https://arxiv.org/abs/2410.11190">Technical report</a>
</p>

Mini-Omni2 is an **omni-interactive** model: it can **understand image, audio and text inputs and hold end-to-end voice conversations with users**. It features **real-time voice output**, **omni-capable multimodal understanding**, and flexible interaction with an **interruption mechanism while speaking**.

<p align="center">
    <img src="./data/figures/framework.jpeg" width="100%"/>
</p>


## Updates

- **2024.10:** Released the model, technical report, and inference and chat demo code.

## Features
βœ… **Multimodal interaction**: understands images, speech, and text, just like GPT-4o.

βœ… **Real-time speech-to-speech** conversational capabilities. No extra ASR or TTS models are required, just like [Mini-Omni](https://github.com/gpt-omni/mini-omni).

<!-- βœ… **Streaming audio output**: with first-chunk latency of audio stream less than 0.3s. -->

<!-- βœ… **Duplex interaction**: hearing while speaking, it can be interrupted by key words like "stop omni". -->


## Demo

NOTE: you need to unmute the video first.

https://github.com/user-attachments/assets/ad97ca7f-f8b4-40c3-a7e8-fa54b4edf155


## ToDo
- [ ] update interruption mechanism


## Install

Create a new conda environment and install the required packages:

```sh
conda create -n omni python=3.10
conda activate omni

git clone https://github.com/gpt-omni/mini-omni2.git
cd mini-omni2
pip install -r requirements.txt
```
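
A quick way to confirm the environment is usable after installation is to import the core dependency and check for a GPU. This is a generic sanity check, not part of the Mini-Omni2 codebase; the full dependency list comes from `requirements.txt`.

```python
# Sanity check after installation: PyTorch is importable and CUDA is visible.
import torch

print("torch", torch.__version__, "| CUDA available:", torch.cuda.is_available())
```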

## Quick start

**Interactive demo**

- start server

NOTE: you need to start the server before running the Streamlit or Gradio demo, with `API_URL` set to the server address.

```sh
sudo apt-get install ffmpeg
conda activate omni
cd mini-omni2
python3 server.py --ip '0.0.0.0' --port 60808
```
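
If the demo later fails to connect, it helps to confirm the server is actually listening before launching the UI. Below is a minimal diagnostic sketch, assuming the server runs on the same machine with the default port from the command above; the request format of the `/chat` endpoint is defined by `server.py` and is not reproduced here.

```python
# Diagnostic sketch: check that the Mini-Omni2 server port is reachable.
import socket

HOST, PORT = "127.0.0.1", 60808  # assumes the server runs locally on the default port

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
    sock.settimeout(2)
    reachable = sock.connect_ex((HOST, PORT)) == 0

print("server reachable" if reachable else f"nothing listening on port {PORT}")
```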


- run streamlit demo

NOTE: you need to run Streamlit **locally** on a machine with PyAudio installed.

```sh
pip install PyAudio==0.2.14
API_URL=http://0.0.0.0:60808/chat streamlit run webui/omni_streamlit.py
```
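
If the Streamlit demo cannot capture audio, a quick check is whether PyAudio sees a microphone at all. This is a diagnostic sketch only, not part of the Mini-Omni2 codebase.

```python
# List the audio input devices visible to PyAudio before launching the demo.
import pyaudio

pa = pyaudio.PyAudio()
for i in range(pa.get_device_count()):
    info = pa.get_device_info_by_index(i)
    if info["maxInputChannels"] > 0:  # keep only devices that can record
        print(info["index"], info["name"])
pa.terminate()
```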


**Local test**

```sh
conda activate omni
cd mini-omni2
# test run the preset audio samples and questions
python inference_vision.py
```

## Mini-Omni2 Overview

**1. Multimodal Modeling**:
We use multiple sequences as the input and output of the model. On the input side, we concatenate image, audio and text features to perform a series of comprehensive tasks, as shown in the figure below. On the output side, we use text-guided delayed parallel output to generate real-time speech responses.
<p align="center">
    <img src="./data/figures/inputids.png" width="100%"/>
</p>
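
A minimal sketch of the input-side concatenation described above, with made-up sequence lengths and a hypothetical hidden size (illustration only, not the actual Mini-Omni2 code):

```python
# Concatenate image, audio and text features into one input sequence for the
# language model (hypothetical shapes, not the real Mini-Omni2 implementation).
import torch

HIDDEN = 896                               # hypothetical LLM hidden size

image_feats = torch.randn(1, 49, HIDDEN)   # e.g. projected CLIP patch features
audio_feats = torch.randn(1, 150, HIDDEN)  # e.g. projected Whisper encoder features
text_embeds = torch.randn(1, 12, HIDDEN)   # embedded text prompt tokens

# The model consumes the three modalities as a single concatenated sequence.
inputs_embeds = torch.cat([image_feats, audio_feats, text_embeds], dim=1)
print(inputs_embeds.shape)                 # torch.Size([1, 211, 896])
```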

**2. Multi-stage Training**:
We propose an efficient alignment training method with three stages: encoder adaptation, modal alignment, and multimodal fine-tuning.
<p align="center">
    <img src="./data/figures/training.jpeg" width="100%"/>
</p>
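
One way to express such a three-stage schedule in code is a per-stage list of trainable modules. The module names below are hypothetical; the actual stage-to-module mapping is defined by the Mini-Omni2 training code.

```python
# Hypothetical stage-to-trainable-modules mapping for the three stages above
# (not the actual Mini-Omni2 configuration).
import torch.nn as nn

STAGES = {
    "1_encoder_adaptation":  ["audio_adapter", "vision_adapter"],
    "2_modal_alignment":     ["audio_adapter", "vision_adapter", "llm"],
    "3_multimodal_finetune": ["audio_adapter", "vision_adapter", "llm", "audio_head"],
}

def set_trainable(model: nn.Module, prefixes: list[str]) -> None:
    """Freeze all parameters except those whose names start with a trainable prefix."""
    for name, param in model.named_parameters():
        param.requires_grad = any(name.startswith(p) for p in prefixes)
```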

<!-- **3. Cases**:
Here are more cases of Mini-Omni2:
<p align="center">
    <img src="./data/figures/samples.png" width="100%"/>
</p> -->

## FAQ

**1. Does the model support other languages?**

No, the model is trained only on English. However, since we use Whisper as the audio encoder, the model can understand other languages supported by Whisper (such as Chinese), but the output is only in English.

**2. Error: cannot run Streamlit in a local browser with a remote Streamlit server**

You need to start Streamlit **locally** with PyAudio installed.


## Acknowledgements 

- [Qwen2](https://github.com/QwenLM/Qwen2/) as the LLM backbone.
- [litGPT](https://github.com/Lightning-AI/litgpt/) for training and inference.
- [whisper](https://github.com/openai/whisper/)  for audio encoding.
- [clip](https://github.com/openai/CLIP)  for image encoding.
- [snac](https://github.com/hubertsiuzdak/snac/)  for audio decoding.
- [CosyVoice](https://github.com/FunAudioLLM/CosyVoice) for generating synthetic speech.
- [OpenOrca](https://huggingface.co/datasets/Open-Orca/OpenOrca) and [MOSS](https://github.com/OpenMOSS/MOSS/tree/main) for alignment.

<!-- ## Star History

[![Star History Chart](https://api.star-history.com/svg?repos=gpt-omni/mini-omni2&type=Date)](https://star-history.com/#gpt-omni/mini-omni2&Date) -->