T2Vid: Translating Long Text into Multi-Image is the Catalyst for Video-LLMs

💻 GitHub   |    📑 Paper   

Model Summary

License

Model License

  • The code in this repo is released under the Apache-2.0 License.
  • The usage of MiniCPM-V series model weights must strictly follow MiniCPM Model License.md.
  • The models and weights of MiniCPM are completely free for academic research. After filling out a "questionnaire" for registration, they are also available for free commercial use.

Statement

  • As an LLM, MiniCPM-Llama3-V 2.5 generates content by learning from a large amount of text, but it cannot comprehend, express personal opinions, or make value judgments. Anything generated by MiniCPM-Llama3-V 2.5 does not represent the views and positions of the model developers.
  • We will not be liable for any problems arising from the use of the MiniCPM-V open-source model, including but not limited to data security issues, risks of public opinion, or any risks and problems arising from the misguidance, misuse, or dissemination of the model.

Training dataset

  • 100K video instruction data from Video-ChatGPT
  • 100K video caption data from ShareGemini
Model size: 8.54B params (BF16, Safetensors)
Note: the serverless Inference API does not yet support model repos that contain custom code.
