nnethercott committed d33e2a8 (parent: db27ac4): Create README.md
README.md
ADDED
@@ -0,0 +1,23 @@
---
license: llama2
---

## Model details

**Motivation**
This model contains the fine-tuned weights from `liuhaotian/llava-v1.5-7b` so that LLM benchmarking can be done.

**Model type:**
LLaVA is an open-source chatbot trained by fine-tuning LLaMA/Vicuna on GPT-generated multimodal instruction-following data.
It is an auto-regressive language model based on the transformer architecture.

## License
Llama 2 is licensed under the LLAMA 2 Community License,
Copyright (c) Meta Platforms, Inc. All Rights Reserved.

## Training dataset
- 558K filtered image-text pairs from LAION/CC/SBU, captioned by BLIP.
- 158K GPT-generated multimodal instruction-following data.
- 450K academic-task-oriented VQA data mixture.
- 40K ShareGPT data.
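Since the stated purpose of these weights is LLM benchmarking, a common evaluation metric is perplexity over held-out text. The sketch below shows only the arithmetic; the token log-probabilities are made-up illustrative values, not outputs from this model, and any real run would obtain them from the loaded checkpoint.

```python
import math

def perplexity(token_logprobs):
    # Perplexity = exp(-mean log-probability) over the evaluated tokens.
    avg = sum(token_logprobs) / len(token_logprobs)
    return math.exp(-avg)

# Illustrative values only, not real model outputs.
logprobs = [-1.2, -0.7, -2.3, -0.4]
print(round(perplexity(logprobs), 3))  # → 3.158
```

Lower perplexity means the model assigns higher probability to the reference text, which is the usual axis of comparison when benchmarking fine-tuned LLM weights against their base model.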