mmaaz60 committed on
Commit
c7a0601
1 Parent(s): 86cf8a0

Update README.md

Files changed (1)
  1. README.md +42 -1
README.md CHANGED
@@ -1,3 +1,44 @@
  ---
- license: mit
+ {}
  ---

The rest of the commit adds the new README content, reproduced below.

[![CODE](https://img.shields.io/badge/GitHub-Repository-<COLOR>)](https://github.com/mbzuai-oryx/LLaVA-pp)

# LLaMA-3-V: Extending the Visual Capabilities of LLaVA with Meta-Llama-3-8B-Instruct

## Repository Overview

This repository features LLaVA v1.5 trained with the Meta-Llama-3-8B-Instruct LLM. The integration combines the strengths of both models to offer advanced vision-language understanding.
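
The sketch below shows one way the checkpoint might be loaded for inference. It assumes the LLaVA/LLaVA++ codebase from the repository linked above is installed and follows the loader API documented by upstream LLaVA (`llava.model.builder.load_pretrained_model`); the exact entry points for this S2 variant may differ, so treat it as a sketch rather than official usage.

```python
# Loading sketch (assumption: the LLaVA / LLaVA++ codebase is installed and
# exposes the upstream loader API; entry points may differ for this S2 fork).
from llava.model.builder import load_pretrained_model
from llava.mm_utils import get_model_name_from_path

model_path = "MBZUAI/LLaVA-Meta-Llama-3-8B-Instruct-FT-S2"

# Returns the tokenizer, the LLaVA model, the CLIP image processor, and the context length.
tokenizer, model, image_processor, context_len = load_pretrained_model(
    model_path=model_path,
    model_base=None,
    model_name=get_model_name_from_path(model_path),
)
```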

## Training Strategy

- **Pretraining:** Only the vision-to-language projector is trained; the rest of the model is frozen (see the sketch after this list).
- **Fine-tuning:** All model parameters, including the LLM, are fine-tuned; only the vision backbone (CLIP) is kept frozen.
- **Note:** During both pretraining and fine-tuning, the vision backbone (CLIP) is augmented with multi-scale features following [S2-Wrapper](https://arxiv.org/abs/2403.13043).
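
A minimal PyTorch sketch of this stage-wise freezing is given below; the parameter-name patterns `mm_projector` and `vision_tower` are assumptions modeled on the LLaVA codebase and may differ in practice.

```python
# Sketch of the stage-wise freezing described above (hypothetical name matching;
# the real module names depend on the LLaVA codebase).
import torch.nn as nn

def set_trainable(model: nn.Module, stage: str) -> None:
    for name, param in model.named_parameters():
        if stage == "pretrain":
            # Pretraining: update only the vision-to-language projector.
            param.requires_grad = "mm_projector" in name
        elif stage == "finetune":
            # Fine-tuning: update everything except the CLIP vision backbone.
            param.requires_grad = "vision_tower" not in name
```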

## Key Components

- **Base Large Language Model (LLM):** [Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct)
- **Base Large Multimodal Model (LMM):** [LLaVA-v1.5](https://github.com/haotian-liu/LLaVA)

## Training Data

- **Pretraining Dataset:** [LCS-558K](https://huggingface.co/datasets/liuhaotian/LLaVA-Pretrain)
- **Fine-tuning Dataset:** [LLaVA-Instruct-665K](https://huggingface.co/datasets/liuhaotian/LLaVA-Instruct-150K/blob/main/llava_v1_5_mix665k.json)

## How to Download

```bash
git lfs install
git clone https://huggingface.co/MBZUAI/LLaVA-Meta-Llama-3-8B-Instruct-FT-S2
```
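
Alternatively, the checkpoint can be fetched from Python with `huggingface_hub`; this is a sketch, and the destination directory is only an example.

```python
# Download the repository contents without git-lfs, using huggingface_hub.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="MBZUAI/LLaVA-Meta-Llama-3-8B-Instruct-FT-S2",
    local_dir="LLaVA-Meta-Llama-3-8B-Instruct-FT-S2",  # example destination directory
)
```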

---

## Contributions

Contributions are welcome! Please 🌟 our repository [LLaVA++](https://github.com/mbzuai-oryx/LLaVA-pp) if you find this model useful.

---