Abstract
The salient multimodal capabilities and interactive experience of GPT-4o highlight its critical role in practical applications, yet it lacks a high-performing open-source counterpart. In this paper, we introduce Baichuan-Omni, the first open-source 7B Multimodal Large Language Model (MLLM) adept at concurrently processing and analyzing the image, video, audio, and text modalities, while delivering an advanced multimodal interactive experience and strong performance. We propose an effective multimodal training schema that starts from a 7B model and proceeds through two stages of multimodal alignment and multitask fine-tuning across the audio, image, video, and text modalities. This approach equips the language model with the ability to handle visual and audio data effectively. Demonstrating strong performance across various omni-modal and multimodal benchmarks, we aim for this contribution to serve as a competitive baseline for the open-source community in advancing multimodal understanding and real-time interaction.
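As a rough illustration of the two-stage schema summarized above, the sketch below lays out one plausible stage configuration: an alignment stage that updates only the modality projectors around the 7B language model, followed by a multitask fine-tuning stage that updates the full model on mixed cross-modal data. All names here (`StageConfig`, the component labels, etc.) are hypothetical placeholders for illustration, not the paper's actual training code.

```python
# Minimal, hypothetical sketch of a two-stage omni-modal training schema
# (illustrative only; not Baichuan-Omni's released training configuration).
from dataclasses import dataclass
from typing import Tuple

@dataclass
class StageConfig:
    name: str
    modalities: Tuple[str, ...]   # data types mixed into this stage
    trainable: Tuple[str, ...]    # components updated in this stage

# Stage 1: multimodal alignment -- teach the language model to consume
# projected visual/audio features while keeping the LLM weights fixed.
STAGE_1 = StageConfig(
    name="multimodal_alignment",
    modalities=("image", "video", "audio", "text"),
    trainable=("visual_projector", "audio_projector"),
)

# Stage 2: multitask fine-tuning -- unfreeze the LLM and train on mixed
# cross-modal instruction/task data.
STAGE_2 = StageConfig(
    name="multitask_finetuning",
    modalities=("image", "video", "audio", "text"),
    trainable=("visual_projector", "audio_projector", "llm"),
)

for stage in (STAGE_1, STAGE_2):
    print(f"{stage.name}: update {stage.trainable} on {stage.modalities}")
```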
Community
@librarian-bot recommend
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- OmniBench: Towards The Future of Universal Omni-Language Models (2024)
- Mini-Omni: Language Models Can Hear, Talk While Thinking in Streaming (2024)
- MIO: A Foundation Model on Multimodal Tokens (2024)
- Aria: An Open Multimodal Native Mixture-of-Experts Model (2024)
- TC-LLaVA: Rethinking the Transfer from Image to Video Understanding with Temporal Considerations (2024)
Hi @kenshinn, congrats on the paper! Excited to see Baichuan's new open model 🔥 Is it possible to share an approximate release date?
Thanks for the attention! Currently, we're undergoing internal security assessments. Once the review is complete, we will be able to provide more information about the release date. Stay tuned! 🎆
Thank you for your attention! Our project was ready in mid-September, and at that time we had not come across your work, so we may not be able to incorporate it into our citations at this point. But we are thrilled to see the advancements in the field; both of our works contribute to open-source omni-modal models! 💪