---
title: MMAudio
emoji: πŸ”Š
colorFrom: blue
colorTo: indigo
sdk: gradio
app_file: app.py
pinned: true
short_description: Video to Audio
---

# Taming Multimodal Joint Training for High-Quality Video-to-Audio Synthesis

Ho Kei Cheng, Masato Ishii, Akio Hayakawa, Takashi Shibuya, Alexander Schwing, Yuki Mitsufuji

University of Illinois Urbana-Champaign, Sony AI, and Sony Group Corporation

[Paper (being prepared)] [Project Page]

Note: This repository is still under construction. Single-example inference should work as expected. The training code will be added. Code is subject to non-backward-compatible changes.

## Highlight

MMAudio generates synchronized audio given video and/or text inputs. Our key innovation is multimodal joint training, which allows training on a wide range of audio-visual and audio-text datasets. Moreover, a synchronization module aligns the generated audio with the video frames.

## Results

(All audio in these results is generated by MMAudio.)

Videos from Sora:

https://github.com/user-attachments/assets/82afd192-0cee-48a1-86ca-bd39b8c8f330

Videos from MovieGen/Hunyuan Video/VGGSound:

https://github.com/user-attachments/assets/29230d4e-21c1-4cf8-a221-c28f2af6d0ca

For more results, visit https://hkchengrex.com/MMAudio/video_main.html.

## Installation

We have only tested this on Ubuntu.

### Prerequisites

We recommend using a miniforge environment.

Clone our repository:

```bash
git clone https://github.com/hkchengrex/MMAudio.git
```

Install with pip:

```bash
cd MMAudio
pip install -e .
```

(If you encounter the `File "setup.py" not found` error, upgrade your pip with `pip install --upgrade pip`.)

### Pretrained models

The models will be downloaded automatically when you run the demo script. MD5 checksums are provided in `mmaudio/utils/download_utils.py`.
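If you download a checkpoint manually, you can verify it against those checksums. A minimal sketch, not the repository's code; the filename and expected hash below are placeholders:

```python
import hashlib
from pathlib import Path

def md5_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream the file in chunks so multi-GB checkpoints don't fill RAM."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

ckpt = Path("weights/mmaudio_large_44k.pth")
expected = "0123456789abcdef0123456789abcdef"  # placeholder; real value in download_utils.py
if ckpt.exists():
    ok = md5_of(ckpt) == expected
    print("checksum OK" if ok else "checksum mismatch; re-download the file")
```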

| Model | Download link | File size |
|-------|---------------|-----------|
| Flow prediction network, small 16kHz | `mmaudio_small_16k.pth` | 601M |
| Flow prediction network, small 44.1kHz | `mmaudio_small_44k.pth` | 601M |
| Flow prediction network, medium 44.1kHz | `mmaudio_medium_44k.pth` | 2.4G |
| Flow prediction network, large 44.1kHz (recommended) | `mmaudio_large_44k.pth` | 3.9G |
| 16kHz VAE | `v1-16.pth` | 655M |
| 16kHz BigVGAN vocoder | `best_netG.pt` | 429M |
| 44.1kHz VAE | `v1-44.pth` | 1.2G |
| Synchformer visual encoder | `synchformer_state_dict.pth` | 907M |

The 44.1kHz vocoder will be downloaded automatically.

The expected directory structure (full):

```text
MMAudio
β”œβ”€β”€ ext_weights
β”‚   β”œβ”€β”€ best_netG.pt
β”‚   β”œβ”€β”€ synchformer_state_dict.pth
β”‚   β”œβ”€β”€ v1-16.pth
β”‚   └── v1-44.pth
β”œβ”€β”€ weights
β”‚   β”œβ”€β”€ mmaudio_small_16k.pth
β”‚   β”œβ”€β”€ mmaudio_small_44k.pth
β”‚   β”œβ”€β”€ mmaudio_medium_44k.pth
β”‚   └── mmaudio_large_44k.pth
└── ...
```

The expected directory structure (minimal, for the recommended model only):

```text
MMAudio
β”œβ”€β”€ ext_weights
β”‚   β”œβ”€β”€ synchformer_state_dict.pth
β”‚   └── v1-44.pth
β”œβ”€β”€ weights
β”‚   └── mmaudio_large_44k.pth
└── ...
```
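Before running the demo, you can sanity-check that this minimal layout is in place. A short illustrative snippet, assuming it is run from the MMAudio root (not part of the repo):

```python
from pathlib import Path

# Files needed for the recommended large_44k model, per the tree above.
required = [
    "ext_weights/synchformer_state_dict.pth",
    "ext_weights/v1-44.pth",
    "weights/mmaudio_large_44k.pth",
]
missing = [p for p in required if not Path(p).exists()]
print("All weights present." if not missing else f"Missing: {missing}")
```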

## Demo

By default, these scripts use the large_44k model. In our experiments, inference takes only around 6GB of GPU memory (in 16-bit mode), which should fit on most modern GPUs.
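To check whether your GPU has that much headroom, you can query free memory with PyTorch. A minimal sketch; the ~6GB threshold simply mirrors the figure above:

```python
import torch

if torch.cuda.is_available():
    free, total = torch.cuda.mem_get_info()  # bytes on the current device
    print(f"{free / 1e9:.1f} GB free of {total / 1e9:.1f} GB")
    if free < 6e9:  # ~6GB, per the note above
        print("Warning: 16-bit inference may not fit; close other GPU jobs.")
else:
    print("No CUDA device found.")
```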

### Command-line interface

With `demo.py`:

```bash
python demo.py --duration=8 --video=<path to video> --prompt "your prompt"
```

The output (audio in `.flac` format and video in `.mp4` format) will be saved in `./output`. See the file for more options. Simply omit the `--video` option for text-to-audio synthesis. The default output (and training) duration is 8 seconds. Longer or shorter durations can also work, but a large deviation from the training duration may result in lower quality.
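To process many clips, `demo.py` can be driven from a small wrapper. A hedged sketch; the folder name and empty prompt are placeholder choices, and it relies only on the flags shown above:

```python
import subprocess
from pathlib import Path

# 'my_videos' is a placeholder folder; adjust the prompt per clip as needed.
for video in sorted(Path("my_videos").glob("*.mp4")):
    subprocess.run(
        ["python", "demo.py", "--duration=8",
         f"--video={video}", "--prompt", ""],
        check=True,  # stop on the first failure
    )
# Results accumulate in ./output (.flac audio plus a muxed .mp4), per the text.
```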

### Gradio interface

Supports video-to-audio and text-to-audio synthesis.

```bash
python gradio_demo.py
```

## Known limitations

1. The model sometimes generates undesired, unintelligible human speech-like sounds.
2. The model sometimes generates undesired background music.
3. The model struggles with unfamiliar concepts, e.g., it can generate "gunfire" but not "RPG firing".

We believe all three of these limitations can be addressed with more high-quality training data.

## Training

Work in progress.

## Evaluation

Work in progress.

## Acknowledgement

Many thanks to: