gpt-omni / ylacombe (HF staff) committed
Commit ed19d73
Parent: b3a6952

Update README.md (#4)


- Update README.md (2706f2188d24428e1a6bfa6e4702001a58c67d09)


Co-authored-by: Yoach Lacombe <ylacombe@users.noreply.huggingface.co>

Files changed (1)
1. README.md +67 -1
README.md CHANGED
@@ -3,6 +3,9 @@ license: mit
  language:
  - en
  base_model: Qwen/Qwen2-0.5B
+ tags:
+ - text-to-speech
+ - speech-to-speech
  ---
@@ -33,4 +36,67 @@ Mini-Omni is an open-source multimodel large language model that can **hear, tal

  ✅ With "Audio-to-Text" and "Audio-to-Audio" **batch inference** to further boost the performance.

- **NOTE**: please refer to https://github.com/gpt-omni/mini-omni for more details.
+ **NOTE**: please refer to the [code repository](https://github.com/gpt-omni/mini-omni) for more details.
+
+ ## Install
+
+ Create a new conda environment and install the required packages:
+
+ ```sh
+ conda create -n omni python=3.10
+ conda activate omni
+
+ git clone https://github.com/gpt-omni/mini-omni.git
+ cd mini-omni
+ pip install -r requirements.txt
+ ```
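+
+ To sanity-check the environment after installing (a minimal check; it assumes the requirements pull in PyTorch, which the inference stack relies on):
+
+ ```sh
+ # Confirms the interpreter version and that torch imports inside the omni env.
+ # torch is assumed to come from requirements.txt; a GPU is not required for this check.
+ python --version
+ python -c "import torch; print(torch.__version__, torch.cuda.is_available())"
+ ```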
+
+ ## Quick start
+
+ **Interactive demo**
+
+ - start server
+ ```sh
+ conda activate omni
+ cd mini-omni
+ python3 server.py --ip '0.0.0.0' --port 60808
+ ```
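+
+ To confirm the server is up before starting a client (an illustrative check, not part of the original steps; `/chat` is the endpoint the demos use, and it may reject plain GETs, but any HTTP status code shows the port is answering):
+
+ ```sh
+ # Prints an HTTP status code if something is listening on port 60808.
+ curl -s -o /dev/null -w '%{http_code}\n' http://0.0.0.0:60808/chat
+ ```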
+
+ - run streamlit demo
+
+ NOTE: you need to run streamlit locally with PyAudio installed.
+
+ ```sh
+ pip install PyAudio==0.2.14
+ API_URL=http://0.0.0.0:60808/chat streamlit run webui/omni_streamlit.py
+ ```
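+
+ If the server runs on a different machine, point `API_URL` at that machine instead (the address below is an illustrative placeholder):
+
+ ```sh
+ # Replace 192.0.2.10 with the IP of the machine running server.py.
+ API_URL=http://192.0.2.10:60808/chat streamlit run webui/omni_streamlit.py
+ ```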
+
+ - run gradio demo
+ ```sh
+ API_URL=http://0.0.0.0:60808/chat python3 webui/omni_gradio.py
+ ```
+
+ example:
+
+ NOTE: you need to unmute first. Gradio does not seem to play the audio stream instantly, so the latency feels a bit longer.
+
+ https://github.com/user-attachments/assets/29187680-4c42-47ff-b352-f0ea333496d9
+
+
+ **Local test**
+
+ ```sh
+ conda activate omni
+ cd mini-omni
+ # test run with the preset audio samples and questions
+ python inference.py
+ ```
+
+ ## Acknowledgements
+
+ - [Qwen2](https://github.com/QwenLM/Qwen2/) as the LLM backbone.
+ - [litGPT](https://github.com/Lightning-AI/litgpt/) for training and inference.
+ - [whisper](https://github.com/openai/whisper/) for audio encoding.
+ - [snac](https://github.com/hubertsiuzdak/snac/) for audio decoding.
+ - [CosyVoice](https://github.com/FunAudioLLM/CosyVoice) for generating synthetic speech.
+ - [OpenOrca](https://huggingface.co/datasets/Open-Orca/OpenOrca) and [MOSS](https://github.com/OpenMOSS/MOSS/tree/main) for alignment.