---
license: apache-2.0
inference: false
---

# LLaVA Model Card

## Model details

**Model type:**
LLaVA is an open-source chatbot trained by fine-tuning LLaMA/Vicuna on GPT-generated multimodal instruction-following data.
It is an auto-regressive language model based on the transformer architecture.
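
Because the hosted inference widget is disabled (`inference: false`), the model is meant to be run locally. Below is a minimal, hedged inference sketch: the weights described by this card are distributed through the LLaVA repository, so the checkpoint name (`llava-hf/llava-1.5-7b-hf`) and the `transformers` LLaVA classes used here are assumptions standing in for that setup.

```python
# Minimal LLaVA inference sketch. The checkpoint is an assumed
# transformers-compatible conversion, not this card's original weights.
import requests
from PIL import Image
from transformers import AutoProcessor, LlavaForConditionalGeneration

model_id = "llava-hf/llava-1.5-7b-hf"  # assumption: community conversion
processor = AutoProcessor.from_pretrained(model_id)
model = LlavaForConditionalGeneration.from_pretrained(model_id)

# Fetch an example image and ask a question about it.
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
prompt = "USER: <image>\nWhat is shown in this image? ASSISTANT:"

inputs = processor(text=prompt, images=image, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=100)
print(processor.decode(output[0], skip_special_tokens=True))
```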

**Model date:**
LLaVA was trained in May 2023.

**Paper or resources for more information:**
https://llava-vl.github.io/

**License:**
Apache License 2.0

**Where to send questions or comments about the model:**
https://github.com/haotian-liu/LLaVA/issues

## Intended use

**Primary intended uses:**
The primary use of LLaVA is research on large multimodal models and chatbots.

**Primary intended users:**
The primary intended users of the model are researchers and hobbyists in computer vision, natural language processing, machine learning, and artificial intelligence.

## Training dataset
- 595K filtered image-text pairs from CC3M.
- 150K GPT-generated multimodal instruction-following examples (record layout sketched below).
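
For illustration, here is what a single instruction-following record might look like, assuming the JSON layout published with the LLaVA repository; the field names and example values are assumptions, not taken from this card.

```python
# A sketch of one record from the GPT-generated instruction-following set,
# assuming the JSON layout from the LLaVA repository (field names and values
# here are illustrative assumptions).
import json

record = {
    "id": "000000442786",                        # hypothetical example id
    "image": "COCO_train2014_000000442786.jpg",  # image the turns refer to
    "conversations": [
        {"from": "human", "value": "<image>\nWhat is happening in the scene?"},
        {"from": "gpt", "value": "A group of people are waiting at a bus stop."},
    ],
}

# Each example pairs an image with a multi-turn conversation; the "<image>"
# token marks where visual features are spliced into the language prompt.
print(json.dumps(record, indent=2))
```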

## Evaluation dataset
A preliminary evaluation of model quality is conducted on a set of 90 visual reasoning questions covering 30 unique images randomly sampled from COCO val 2014, where each image is paired with three question types: conversational, detailed description, and complex reasoning. We use GPT-4 to judge the model outputs.
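
As a rough illustration of this GPT-4-as-judge setup, the sketch below scores one answer with the `openai` Python package (v1 client, `OPENAI_API_KEY` set in the environment); the prompt wording is an assumption, not the exact rubric used in the evaluation.

```python
# Hedged sketch of GPT-4-as-judge scoring; the prompt is illustrative,
# not the exact evaluation rubric.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def judge(question: str, reference: str, answer: str) -> str:
    """Ask GPT-4 to rate a model answer against a reference answer."""
    prompt = (
        "You are evaluating the response of a visual assistant.\n"
        f"Question: {question}\n"
        f"Reference answer: {reference}\n"
        f"Assistant answer: {answer}\n"
        "Rate the assistant answer from 1 to 10 for helpfulness, relevance, "
        "and accuracy, then briefly justify the score."
    )
    completion = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    return completion.choices[0].message.content

print(judge(
    "What is unusual about this image?",
    "A man is ironing clothes on the back of a moving taxi.",
    "A person irons laundry while standing on a car.",
))
```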

We also evaluate the model on the ScienceQA dataset; combining LLaVA with GPT-4 sets a new state of the art on this benchmark.
See https://llava-vl.github.io/ for more details.