variante committed on
Commit bf14af1
1 Parent(s): 480f0f3

Update README.md

Files changed (1)
  1. README.md +44 -3
README.md CHANGED
@@ -1,3 +1,44 @@
- ---
- license: apache-2.0
- ---
+ ---
+ inference: false
+ pipeline_tag: image-text-to-text
+ license: apache-2.0
+ datasets:
+ - VIMA/VIMA-Data
+ tags:
+ - llara
+ - llava
+ - robotics
+ - vlm
+ ---
+ <br>
+ <br>
+
+ # LLaRA Model Card
+
+ This model is released with the paper **LLaRA: Supercharging Robot Learning Data for Vision-Language Policy**.
+
+ [Xiang Li](https://xxli.me)<sup>1</sup>, [Cristina Mata](https://openreview.net/profile?id=~Cristina_Mata1)<sup>1</sup>, [Jongwoo Park](https://github.com/jongwoopark7978)<sup>1</sup>, [Kumara Kahatapitiya](https://www3.cs.stonybrook.edu/~kkahatapitiy)<sup>1</sup>, [Yoo Sung Jang](https://yjang43.github.io/)<sup>1</sup>, [Jinghuan Shang](https://elicassion.github.io/)<sup>1</sup>, [Kanchana Ranasinghe](https://kahnchana.github.io/)<sup>1</sup>, [Ryan Burgert](https://ryanndagreat.github.io/)<sup>1</sup>, [Mu Cai](https://pages.cs.wisc.edu/~mucai/)<sup>2</sup>, [Yong Jae Lee](https://pages.cs.wisc.edu/~yongjaelee/)<sup>2</sup>, and [Michael S. Ryoo](http://michaelryoo.com/)<sup>1</sup>
+
+ <sup>1</sup>Stony Brook University <sup>2</sup>University of Wisconsin-Madison
+
+ ## Model details
+
+ **Model type:**
+ LLaRA is an open-source visuomotor policy trained by fine-tuning [LLaVA-7b-v1.5](https://huggingface.co/liuhaotian/llava-v1.5-7b) on the instruction-following dataset `D-inBC` and 6 auxiliary datasets, all converted from [VIMA-Data](https://huggingface.co/datasets/VIMA/VIMA-Data).
+ For the conversion code, please refer to [convert_vima.ipynb](https://github.com/LostXine/LLaRA/blob/main/datasets/convert_vima.ipynb); a minimal loading sketch follows after the diff.
+
+ **Model date:**
+ llava-1.5-7b-llara-D-inBC-Aux-D-VIMA-80k was trained in June 2024.
+
+ **Paper or resources for more information:**
+ https://github.com/LostXine/LLaRA
+
+ **Where to send questions or comments about the model:**
+ https://github.com/LostXine/LLaRA/issues
+
+ ## Intended use
+ **Primary intended uses:**
+ The primary use of LLaRA is research on large multimodal models and chatbots.
+
+ **Primary intended users:**
+ The primary intended users of the model are researchers and hobbyists in computer vision, natural language processing, machine learning, and artificial intelligence.
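
Since the checkpoint is a LLaVA-1.5 fine-tune, here is a minimal loading and inference sketch. It assumes the LLaVA codebase bundled with the [LLaRA repository](https://github.com/LostXine/LLaRA) is installed; the Hub id, observation image, and query below are illustrative placeholders and are not specified by this model card.

```python
# Minimal inference sketch. Assumes the LLaVA code shipped with
# https://github.com/LostXine/LLaRA has been installed (e.g. `pip install -e .`).
# The checkpoint id, image path, and query are illustrative placeholders.
from llava.eval.run_llava import eval_model
from llava.mm_utils import get_model_name_from_path

model_path = "variante/llava-1.5-7b-llara-D-inBC-Aux-D-VIMA-80k"  # assumed Hub id

# Argument object expected by LLaVA's single-query evaluation helper.
args = type("Args", (), {
    "model_path": model_path,
    "model_base": None,
    "model_name": get_model_name_from_path(model_path),
    "query": "Move the red block into the green bowl.",  # placeholder instruction
    "conv_mode": None,
    "image_file": "observation.png",  # placeholder scene image (path or URL)
    "sep": ",",
    "temperature": 0,
    "top_p": None,
    "num_beams": 1,
    "max_new_tokens": 512,
})()

# Loads the checkpoint, runs one image-text query, and prints the text response.
eval_model(args)
```

The model replies in text; the exact prompt templates used for `D-inBC` and the conversion of the response into executable VIMA-Bench actions follow the evaluation code in the LLaRA repository.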