LLaMA-3-V: Extending the Visual Capabilities of LLaVA with Meta-Llama-3-8B-Instruct
Repository Overview
This repository provides LLaVA v1.5 trained with the Meta-Llama-3-8B-Instruct LLM. The integration combines the strengths of both models for vision-language understanding.
Training Strategy
- Pretraining: Only the vision-to-language projector is trained; the rest of the model is kept frozen.
- Fine-tuning: All model parameters, including the LLM, are fine-tuned; only the vision backbone (CLIP) is kept frozen (a rough sketch of this freezing scheme follows this list).
- Note: During both pretraining and fine-tuning, the vision backbone (CLIP) is augmented with multi-scale features following the S2-Wrapper approach.
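A minimal sketch of the two-stage freezing scheme is shown below. It assumes PyTorch and LLaVA-style attribute names (`get_vision_tower()`, `get_model().mm_projector`); these names are assumptions, and the repository's actual training scripts may differ.

```python
# Rough sketch of the two-stage freezing scheme described above.
# Assumes a LLaVA-style model exposing get_vision_tower() and
# get_model().mm_projector; these names are assumptions, not this
# repository's exact training code.

def configure_trainable_params(model, stage: str):
    assert stage in ("pretrain", "finetune")

    # Fine-tuning unfreezes everything by default;
    # pretraining keeps the whole model frozen.
    for p in model.parameters():
        p.requires_grad = (stage == "finetune")

    # The CLIP vision backbone stays frozen in both stages.
    for p in model.get_vision_tower().parameters():
        p.requires_grad = False

    # The vision-to-language projector is trained in both stages.
    for p in model.get_model().mm_projector.parameters():
        p.requires_grad = True
```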
Key Components
- Base Large Language Model (LLM): Meta-Llama-3-8B-Instruct
- Base Large Multimodal Model (LMM): LLaVA-v1.5
Training Data
- Pretraining Dataset: LCS-558K
- Fine-tuning Dataset: LLaVA-Instruct-665K
Download
```bash
git lfs install
git clone https://huggingface.co/MBZUAI/LLaVA-Meta-Llama-3-8B-Instruct-FT-S2
```
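Once downloaded, the checkpoint can be loaded with the LLaVA codebase. The sketch below follows LLaVA's `load_pretrained_model` helper; the exact entry points in LLaVA++ may differ, so treat it as an assumption rather than the repository's official usage.

```python
# Hypothetical loading sketch based on the LLaVA codebase API
# (llava.model.builder / llava.mm_utils); verify against LLaVA++.
from llava.model.builder import load_pretrained_model
from llava.mm_utils import get_model_name_from_path

model_path = "MBZUAI/LLaVA-Meta-Llama-3-8B-Instruct-FT-S2"

# Returns the tokenizer, the multimodal model, the CLIP image
# processor, and the model's context length.
tokenizer, model, image_processor, context_len = load_pretrained_model(
    model_path=model_path,
    model_base=None,
    model_name=get_model_name_from_path(model_path),
)
```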
Contributions
Contributions are welcome! Please 🌟 the LLaVA++ repository if you find this model useful.