LLaVA-Hound Model Card
Model details
Model type: LLaVA-Hound is an open-source video large multimodal model, fine-tuned on video instruction-following data on top of a large language model.
This model is the SFT version, trained on an image and video instruction dataset starting from ShareGPTVideo/LLaVA-Hound-Pretrain.
Base LLM: lmsys/vicuna-7b-v1.5
Model date: Trained on March 15, 2024.
Paper or resources for more information:
Paper: https://huggingface.co/papers/2404.01258
Code: https://github.com/RifleZhang/LLaVA-Hound-DPO
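The SFT checkpoint is hosted on the Hugging Face Hub and is intended to be run with the code in the repository above. As a minimal sketch of fetching the weights with `huggingface_hub` (the repo id and local path below are assumptions, not taken from this card):

```python
# Minimal sketch: download the SFT checkpoint for use with the LLaVA-Hound codebase.
# The repo id below is an assumption; substitute the actual Hub repo if it differs.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="ShareGPTVideo/LLaVA-Hound-SFT",   # assumed Hub repo id
    local_dir="checkpoints/llava-hound-sft",   # hypothetical local path
)
print(f"Checkpoint downloaded to: {local_dir}")
```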
License
Released under the lmsys/vicuna-7b-v1.5 license.
Where to send questions or comments about the model: https://github.com/RifleZhang/LLaVA-Hound-DPO/issues
Intended use
Primary intended uses: Video (image) instruction-following.
Primary intended users: Researchers in artificial intelligence, large multimodal models, and related fields.
Training dataset
ShareGPTVideo dataset.
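The training data (video frames plus instruction and caption pairs) is released under the ShareGPTVideo organization on the Hugging Face Hub. A hedged sketch of inspecting and downloading it with `huggingface_hub` is shown below; the dataset repo id is an assumption and should be checked against the ShareGPTVideo organization page.

```python
# Minimal sketch: inspect and fetch the ShareGPTVideo training data from the Hub.
# The repo id is an assumption; check the ShareGPTVideo organization for the exact name.
from huggingface_hub import list_repo_files, snapshot_download

repo_id = "ShareGPTVideo/train_video_and_instruction"  # assumed dataset repo id

# List the files first to see what splits/archives are available.
for path in list_repo_files(repo_id, repo_type="dataset"):
    print(path)

# Then download the full snapshot to a local directory (hypothetical path).
data_dir = snapshot_download(repo_id, repo_type="dataset", local_dir="data/sharegpt_video")
print(f"Dataset downloaded to: {data_dir}")
```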
Evaluation
Follow the evaluation instructions in https://github.com/RifleZhang/LLaVA-Hound-DPO/blob/main/README.md
Paper
https://huggingface.co/papers/2404.01258
Citation
@article{zhang2024direct,
title={Direct Preference Optimization of Video Large Multimodal Models from Language Model Reward},
author={Zhang, Ruohong and Gui, Liangke and Sun, Zhiqing and Feng, Yihao and Xu, Keyang and Zhang, Yuanhan and Fu, Di and Li, Chunyuan and Hauptmann, Alexander and Bisk, Yonatan and others},
journal={arXiv preprint arXiv:2404.01258},
year={2024}
}