---
license: apache-2.0
datasets:
- lmms-lab/ClothoAQA
- Loie/VGGSound
language:
- en
metrics:
- accuracy
pipeline_tag: visual-question-answering
library_name: transformers
tags:
- Audio-visual Question Answering
- Audio Question Answering
- multimodal large language model
---
<p align="center">
<img src="https://cdn-uploads.huggingface.co/production/uploads/63913b120cf6b11c487ca31d/ROs4bHIp4zJ7g7vzgUycu.png" width="150" style="margin-bottom: 0.2;"/>
</p>
<h3 align="center"><a href="https://arxiv.org/abs/2406.07476">VideoLLaMA 2: Advancing Spatial-Temporal Modeling and Audio Understanding in Video-LLMs</a></h3>
<h5 align="center"> If you like our project, please give us a star ⭐ on <a href="https://github.com/DAMO-NLP-SG/VideoLLaMA2">GitHub</a> for the latest update. </h5>
<p align="center"><video src="https://cdn-uploads.huggingface.co/production/uploads/63913b120cf6b11c487ca31d/Wj7GuqQ0CB9JRoPo6_GoH.webm" width="800"></video></p>
## 📰 News
* **[2024.10.22]** Release checkpoints of [VideoLLaMA2.1-7B-AV](https://huggingface.co/DAMO-NLP-SG/VideoLLaMA2.1-7B-AV).
* **[2024.10.15]** Release checkpoints of [VideoLLaMA2.1-7B-16F-Base](https://huggingface.co/DAMO-NLP-SG/VideoLLaMA2.1-7B-16F-Base) and [VideoLLaMA2.1-7B-16F](https://huggingface.co/DAMO-NLP-SG/VideoLLaMA2.1-7B-16F).
* **[2024.08.14]** Release checkpoints of [VideoLLaMA2-72B-Base](https://huggingface.co/DAMO-NLP-SG/VideoLLaMA2-72B-Base) and [VideoLLaMA2-72B](https://huggingface.co/DAMO-NLP-SG/VideoLLaMA2-72B).
* **[2024.07.30]** Release checkpoints of [VideoLLaMA2-8x7B-Base](https://huggingface.co/DAMO-NLP-SG/VideoLLaMA2-8x7B-Base) and [VideoLLaMA2-8x7B](https://huggingface.co/DAMO-NLP-SG/VideoLLaMA2-8x7B).
* **[2024.06.25]** 🔥🔥 As of Jun 25, our [VideoLLaMA2-7B-16F](https://huggingface.co/DAMO-NLP-SG/VideoLLaMA2-7B-16F) is the **Top-1** ~7B-sized VideoLLM on the [MLVU Leaderboard](https://github.com/JUNJIE99/MLVU?tab=readme-ov-file#trophy-mini-leaderboard).
* **[2024.06.18]** 🔥🔥 As of Jun 18, our [VideoLLaMA2-7B-16F](https://huggingface.co/DAMO-NLP-SG/VideoLLaMA2-7B-16F) is the **Top-1** ~7B-sized VideoLLM on the [VideoMME Leaderboard](https://video-mme.github.io/home_page.html#leaderboard).
* **[2024.06.17]** 👋👋 Update technical report with the latest results and the missing references. If you have work closely related to VideoLLaMA 2 that is not mentioned in the paper, feel free to let us know.
* **[2024.06.14]** 🔥🔥 [Online Demo](https://huggingface.co/spaces/lixin4ever/VideoLLaMA2) is available.
* **[2024.06.03]** Release training, evaluation, and serving codes of VideoLLaMA 2.
## 🌎 Model Zoo
### Vision-Only Checkpoints
| Model Name | Type | Visual Encoder | Language Decoder | # Training Frames |
|:-------------------|:--------------:|:----------------|:------------------|:----------------------:|
| [VideoLLaMA2-7B-Base](https://huggingface.co/DAMO-NLP-SG/VideoLLaMA2-7B-Base) | Base | [clip-vit-large-patch14-336](https://huggingface.co/openai/clip-vit-large-patch14-336) | [Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) | 8 |
| [VideoLLaMA2-7B](https://huggingface.co/DAMO-NLP-SG/VideoLLaMA2-7B) | Chat | [clip-vit-large-patch14-336](https://huggingface.co/openai/clip-vit-large-patch14-336) | [Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) | 8 |
| [VideoLLaMA2-7B-16F-Base](https://huggingface.co/DAMO-NLP-SG/VideoLLaMA2-7B-16F-Base) | Base | [clip-vit-large-patch14-336](https://huggingface.co/openai/clip-vit-large-patch14-336) | [Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) | 16 |
| [VideoLLaMA2-7B-16F](https://huggingface.co/DAMO-NLP-SG/VideoLLaMA2-7B-16F) | Chat | [clip-vit-large-patch14-336](https://huggingface.co/openai/clip-vit-large-patch14-336) | [Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) | 16 |
| [VideoLLaMA2-8x7B-Base](https://huggingface.co/DAMO-NLP-SG/VideoLLaMA2-8x7B-Base) | Base | [clip-vit-large-patch14-336](https://huggingface.co/openai/clip-vit-large-patch14-336) | [Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1) | 8 |
| [VideoLLaMA2-8x7B](https://huggingface.co/DAMO-NLP-SG/VideoLLaMA2-8x7B) | Chat | [clip-vit-large-patch14-336](https://huggingface.co/openai/clip-vit-large-patch14-336) | [Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1) | 8 |
| [VideoLLaMA2-72B-Base](https://huggingface.co/DAMO-NLP-SG/VideoLLaMA2-72B-Base) | Base | [clip-vit-large-patch14-336](https://huggingface.co/openai/clip-vit-large-patch14-336) | [Qwen2-72B-Instruct](https://huggingface.co/Qwen/Qwen2-72B-Instruct) | 8 |
| [VideoLLaMA2-72B](https://huggingface.co/DAMO-NLP-SG/VideoLLaMA2-72B) | Chat | [clip-vit-large-patch14-336](https://huggingface.co/openai/clip-vit-large-patch14-336) | [Qwen2-72B-Instruct](https://huggingface.co/Qwen/Qwen2-72B-Instruct) | 8 |
| [VideoLLaMA2.1-7B-16F-Base](https://huggingface.co/DAMO-NLP-SG/VideoLLaMA2.1-7B-16F-Base) | Base | [siglip-so400m-patch14-384](https://huggingface.co/google/siglip-so400m-patch14-384) | [Qwen2-7B-Instruct](https://huggingface.co/Qwen/Qwen2-7B-Instruct) | 16 |
| [VideoLLaMA2.1-7B-16F](https://huggingface.co/DAMO-NLP-SG/VideoLLaMA2.1-7B-16F) | Chat | [siglip-so400m-patch14-384](https://huggingface.co/google/siglip-so400m-patch14-384) | [Qwen2-7B-Instruct](https://huggingface.co/Qwen/Qwen2-7B-Instruct) | 16 |
### Audio-Visual Checkpoints
| Model Name | Type | Audio Encoder | Language Decoder |
|:-------------------|:--------------:|:----------------|:----------------------:|
| [VideoLLaMA2.1-7B-AV](https://huggingface.co/DAMO-NLP-SG/VideoLLaMA2.1-7B-AV) (**This Checkpoint**) | Chat | [Fine-tuned BEATs_iter3+(AS2M)(cpt2)](https://1drv.ms/u/s!AqeByhGUtINrgcpj8ujXH1YUtxooEg?e=E9Ncea) | [VideoLLaMA2.1-7B-16F](https://huggingface.co/DAMO-NLP-SG/VideoLLaMA2.1-7B-16F) |
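To work from a local copy of this checkpoint (for example, to pass a local path as `--model-path` to the inference script below), the repository can be fetched with `huggingface_hub`. A minimal sketch; the `local_dir` value is just an example path:

```python
from huggingface_hub import snapshot_download

# Download the full checkpoint repository (weights, configs, processor files).
# `local_dir` below is an arbitrary example destination.
local_path = snapshot_download(
    repo_id="DAMO-NLP-SG/VideoLLaMA2.1-7B-AV",
    local_dir="./checkpoints/VideoLLaMA2.1-7B-AV",
)
print(local_path)
```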
## 🚀 Main Results
### Multi-Choice Video QA & Video Captioning
<p><img src="https://cdn-uploads.huggingface.co/production/uploads/63913b120cf6b11c487ca31d/Z81Dl2MeVlg8wLbYOyTvI.png" width="800"/></p>
### Open-Ended Video QA
<p><img src="https://cdn-uploads.huggingface.co/production/uploads/63913b120cf6b11c487ca31d/UoAr7SjbPSPe1z23HBsUh.png" width="800"/></p>
### Multi-Choice & Open-Ended Audio QA
<p><img src="https://huggingface.co/YifeiXin/xin/resolve/main/VideoLLaMA2-audio.png" width="800"/></p>
### Open-Ended Audio-Visual QA
<p><img src="https://huggingface.co/YifeiXin/xin/resolve/main/VideoLLaAM2.1-AV.png" width="800"/></p>
## 🤖 Inference with VideoLLaMA2-AV
```python
import argparse
import sys
sys.path.append('./')

from videollama2 import model_init, mm_infer
from videollama2.utils import disable_torch_init


def inference(args):
    disable_torch_init()

    model, processor, tokenizer = model_init(args.model_path)

    # Drop the unused tower so audio-only / video-only inference
    # skips the other modality's encoder.
    if args.modal_type == "a":
        model.model.vision_tower = None
    elif args.modal_type == "v":
        model.model.audio_tower = None
    elif args.modal_type == "av":
        pass
    else:
        raise NotImplementedError(f"Unsupported modal type: {args.modal_type}")

    # Pick an example input and question matching the requested modality.
    if args.modal_type == "av":
        # Audio-visual inference
        audio_video_path = "assets/00003491.mp4"
        question = "Please describe the video with audio information."
    elif args.modal_type == "a":
        # Audio-only inference
        audio_video_path = "assets/bird-twitter-car.wav"
        question = "Please describe the audio."
    else:
        # Video-only inference
        audio_video_path = "assets/output_v_1jgsRbGzCls.mp4"
        question = "What activity are the people practicing in the video?"

    preprocess = processor['audio' if args.modal_type == "a" else 'video']
    if args.modal_type == "a":
        audio_video_tensor = preprocess(audio_video_path)
    else:
        # `va=True` makes the video preprocessor also extract the audio track.
        audio_video_tensor = preprocess(audio_video_path, va=args.modal_type == "av")

    output = mm_infer(
        audio_video_tensor,
        question,
        model=model,
        tokenizer=tokenizer,
        modal='audio' if args.modal_type == "a" else 'video',
        do_sample=False,
    )
    print(output)


if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument('--model-path', help='Model name or local path.', required=False, default='DAMO-NLP-SG/VideoLLaMA2.1-7B-AV')
    parser.add_argument('--modal-type', choices=["a", "v", "av"], help='a: audio-only, v: video-only, av: audio-visual.', required=True)
    args = parser.parse_args()
    inference(args)
```
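The script selects the example input by `--modal-type`, so a quick audio-visual run is `python inference.py --modal-type av` (the filename is arbitrary). The entry point can also be driven programmatically; a minimal sketch, assuming the example assets above exist locally:

```python
from argparse import Namespace

# Equivalent to: python inference.py --modal-type av
args = Namespace(model_path='DAMO-NLP-SG/VideoLLaMA2.1-7B-AV', modal_type='av')
inference(args)
```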
## Citation
If you find VideoLLaMA useful for your research and applications, please cite using this BibTeX:
```bibtex
@article{damonlpsg2024videollama2,
  title = {VideoLLaMA 2: Advancing Spatial-Temporal Modeling and Audio Understanding in Video-LLMs},
  author = {Cheng, Zesen and Leng, Sicong and Zhang, Hang and Xin, Yifei and Li, Xin and Chen, Guanzheng and Zhu, Yongxin and Zhang, Wenqi and Luo, Ziyang and Zhao, Deli and Bing, Lidong},
  journal = {arXiv preprint arXiv:2406.07476},
  year = {2024},
  url = {https://arxiv.org/abs/2406.07476}
}
@article{damonlpsg2023videollama,
title = {Video-LLaMA: An Instruction-tuned Audio-Visual Language Model for Video Understanding},
author = {Zhang, Hang and Li, Xin and Bing, Lidong},
journal = {arXiv preprint arXiv:2306.02858},
year = {2023},
url = {https://arxiv.org/abs/2306.02858}
}
``` |