---
license: mit
datasets:
- nyu-visionx/Cambrian-10M
language:
- en
base_model:
- tiiuae/falcon-mamba-7b-instruct
pipeline_tag: image-text-to-text
---

# Viper: Open Mamba-based Vision-Language Models
**Yufan Zhuang<sup>1,2</sup>, Pierce Chuang<sup>2</sup>, Yichao Lu<sup>2</sup>, Abhay Harpale<sup>2</sup>, Vikas Bhardwaj<sup>2</sup>, Jingbo Shang<sup>1</sup>**

**<sup>1</sup>UC San Diego**, **<sup>2</sup>Meta**

[Viper-Jamba-52B](https://huggingface.co/ViperVLM/Viper-Jamba-52B) || [Viper-Mamba-7B](https://huggingface.co/ViperVLM/Viper-Mamba-7B) || [Evaluation](https://huggingface.co/spaces/opencompass/open_vlm_leaderboard) || [Github](https://github.com/EvanZhuang/viper)

![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/6438ccbb3b46237de3d052e8/RFArMOH2TMI_G9bZTZr8_.jpeg)
(Logo created by ChatGPT-4o)

* Viper VLMs are built on the Mamba architecture, which offers linear-time efficiency and strong long-range reasoning compared to Transformers.
* The models ingest visual tokens from the entire image, leveraging Mamba's linear-time complexity and long-range reasoning for vision tasks; they are trained on the Cambrian-7M dataset and natively support inputs up to 2K resolution.
* Viper VLMs demonstrate competitive performance across diverse benchmarks, setting the stage for potential future shifts in vision-language model architectures.

## Introduction

We introduce *Viper*, a series of open vision-language models (VLMs) built on the Mamba architecture.
Since its inception, Mamba has been regarded as a promising alternative to the Transformer as the foundational architecture for large language models.
Mamba offers linear-time complexity with respect to input sequence length, while also outperforming Transformers on tasks that require understanding long-range dependencies.

In Viper VLMs, we feed all visual tokens into the model and run inference over the entire image, relying on Mamba's efficiency and long-range reasoning power to comprehend the visual input.
The models are trained on Cambrian-7M and natively support up to 2K resolution.
We show that Viper VLMs are competitive with open-source VLMs across diverse benchmarks.
This work lays the groundwork for potential architectural shifts in future vision-language models, highlighting Mamba's promising role in advancing the field.

## Model Architecture

We use a single-encoder design with linear projectors connecting the vision encoder to the LLM backbone.

| Model | Encoder | LLM backbone | Arch | Input Resolution (Training) |
|----------|----------|----------|----------|----------|
| Viper-Jamba-52B | [clip-vit-large-patch14-336](https://huggingface.co/openai/clip-vit-large-patch14-336) | [Jamba-1.5-Mini](https://huggingface.co/ai21labs/AI21-Jamba-1.5-Mini) | MoE-Jamba | Up to 1344x1344 pixels |
| Viper-Mamba-7B | [clip-vit-large-patch14-336](https://huggingface.co/openai/clip-vit-large-patch14-336) | [falcon-mamba-7b-instruct](https://huggingface.co/tiiuae/falcon-mamba-7b-instruct) | Dense-Mamba | Up to 2352x2352 pixels |

We use AnyRes to support high-resolution inputs.

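This card does not spell out the AnyRes details, so purely as an illustration of the idea: the sketch below (function name and tile logic are hypothetical, not the Viper API) splits a high-resolution image into encoder-sized tiles plus a downscaled global view, which are then encoded independently.

```
# Hypothetical sketch of AnyRes-style tiling -- not the actual Viper implementation.
from PIL import Image

TILE = 336  # native input size of clip-vit-large-patch14-336

def anyres_tiles(image: Image.Image, tile: int = TILE):
    # Pad so both sides are multiples of the tile size.
    w, h = image.size
    pw, ph = ((w + tile - 1) // tile) * tile, ((h + tile - 1) // tile) * tile
    padded = Image.new("RGB", (pw, ph))
    padded.paste(image, (0, 0))

    # Fixed-size local tiles covering the padded image.
    tiles = [
        padded.crop((x, y, x + tile, y + tile))
        for y in range(0, ph, tile)
        for x in range(0, pw, tile)
    ]
    # One downscaled global view preserves the overall layout.
    return [image.resize((tile, tile))] + tiles

# Example: a 1344x1344 input yields 1 global view + 16 local tiles.
print(len(anyres_tiles(Image.new("RGB", (1344, 1344)))))  # 17
```
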
## Evaluation

![image/png](https://cdn-uploads.huggingface.co/production/uploads/6438ccbb3b46237de3d052e8/qs5uJXAgUUE1qL1XeWghH.png)

## Usage

Environment configuration:
```
git clone https://github.com/EvanZhuang/viper.git
cd ./viper
```
Create and activate the conda environment, then install the dependencies:
```
conda create --name viper python=3.10
conda activate viper
pip install -r requirements.txt
pip install flash-attn --no-build-isolation
pip install mamba-ssm[causal-conv1d]
```
The environment depends on [flash-attn](https://github.com/Dao-AILab/flash-attention), [causal-conv1d](https://github.com/Dao-AILab/causal-conv1d), and [mamba-ssm](https://github.com/state-spaces/mamba).
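If the kernel packages built correctly, they should import without errors; as an optional sanity check (this snippet is ours, not part of the repository):
```
# Optional sanity check: the compiled kernel packages should import cleanly.
import flash_attn      # from flash-attn
import causal_conv1d   # from causal-conv1d
import mamba_ssm       # from mamba-ssm

print("flash-attn, causal-conv1d, and mamba-ssm imported successfully")
```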

Install the package:
```
pip install vipervlm
```
Then you can use the Viper VLMs in the following way:
```
import copy
import torch
from PIL import Image  # needed for Image.open below

from viper.model.builder import load_pretrained_model
from viper.conversation import conv_templates
from viper.mm_utils import get_model_name_from_path, process_images, tokenizer_image_token

model_path = "Viper-Mamba-7B"
model_name = get_model_name_from_path(model_path)
tokenizer, model, image_processor, _ = load_pretrained_model(model_path, None, model_name, use_flash_attn=True)
model.eval()

conv_mode = 'system_jamba'
DEFAULT_IMAGE_TOKEN = '<image>'
IMAGE_TOKEN_INDEX = -200

# Example chat-format input; replace the placeholder path with your own image file.
message = [
    {'type': 'image', 'value': 'path/to/your/image.jpg'},
    {'type': 'text', 'value': 'Describe this image.'},
]

content, images = '', []
image_sizes = []  # Store image sizes

# Build the prompt from the chat-format input
for msg in message:
    if msg['type'] == 'text':
        content += msg['value']
    else:
        img = Image.open(msg['value']).convert('RGB')
        images.append(img)
        image_sizes.append(img.size)  # Store the size of each image
        content += (DEFAULT_IMAGE_TOKEN + '\n')

# Preprocess the images for the vision encoder
image_tensor = process_images(images, image_processor, model.config)[0]

conv = copy.deepcopy(conv_templates[conv_mode])
conv.append_message(conv.roles[0], content)

prompt_question = conv.get_prompt(add_generation_prompt=True)

input_ids = tokenizer_image_token(prompt_question,
                                  tokenizer,
                                  IMAGE_TOKEN_INDEX,
                                  return_tensors='pt')
input_ids = input_ids.unsqueeze(0).to(device='cuda', non_blocking=True)
image_tensor = image_tensor.unsqueeze(0).to(dtype=torch.bfloat16, device='cuda', non_blocking=True)

# Pass image sizes along with the other generation parameters
with torch.inference_mode():
    cont = model.generate(
        input_ids,
        images=image_tensor,
        image_sizes=image_sizes,
        do_sample=False,
        max_new_tokens=4096,
        temperature=0,
        pad_token_id=tokenizer.pad_token_id,
        use_cache=True,
    )
text_outputs = tokenizer.batch_decode(cont, skip_special_tokens=True)[0]
print(text_outputs)
```

## Throughput Analysis
Viper-Jamba-52B has only 12B active parameters.

![image/png](https://cdn-uploads.huggingface.co/production/uploads/6438ccbb3b46237de3d052e8/9WMOvMv24vJTLTFTHTzBW.png)

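To get a rough sense of decode throughput on your own hardware, a simple timing loop around `model.generate` is usually enough. The helper below is a hypothetical sketch (not part of the released package), reusing the model and tensors prepared in the Usage snippet above:

```
# Rough decode-throughput measurement -- a hypothetical sketch, not the released benchmark.
import time
import torch

def decode_throughput(model, input_ids, image_tensor, image_sizes, max_new_tokens=256):
    torch.cuda.synchronize()
    start = time.perf_counter()
    with torch.inference_mode():
        out = model.generate(
            input_ids,
            images=image_tensor,
            image_sizes=image_sizes,
            do_sample=False,
            max_new_tokens=max_new_tokens,
            use_cache=True,
        )
    torch.cuda.synchronize()
    elapsed = time.perf_counter() - start
    # The Usage snippet decodes the output directly, so we assume generate
    # returns only the newly generated tokens here.
    return out.shape[-1] / elapsed  # tokens per second

print(f"{decode_throughput(model, input_ids, image_tensor, image_sizes):.1f} tok/s")
```
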
## Dataset
We train our models on [Cambrian-7M](https://github.com/cambrian-mllm/cambrian).
The dataset provides a wide variety of high-quality image-conversation pairs sourced from diverse environments and contexts, enabling robust multi-modal learning.

![image/png](https://cdn-uploads.huggingface.co/production/uploads/6438ccbb3b46237de3d052e8/xgK6Bg8TuFbWzB4BephZn.png)

## Training Recipe
We employ a progressive three-stage training procedure designed to optimize performance across varying levels of input complexity and resolution.

![image/png](https://cdn-uploads.huggingface.co/production/uploads/6438ccbb3b46237de3d052e8/vQHSIf3PRYab1g8c-owzJ.png)

Training begins with low-resolution inputs, allowing the model to focus on basic structural and semantic relationships without the computational overhead of detailed features.
In the second stage, we introduce medium-resolution inputs, expanding the model's capacity to capture more nuanced patterns while gradually increasing sequence length.
Finally, in the high-resolution stage, the model is trained on longer sequences with a broader range of input variability, enhancing its ability to generalize to diverse, complex visual and linguistic tasks.
This staged approach ensures a smooth transition from coarse- to fine-grained learning while maintaining the model's existing capabilities.

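The exact per-stage resolutions and sequence lengths are not listed in this card; purely as an illustration of the schedule described above (all values below are placeholders, except the 2352-pixel cap taken from the architecture table):

```
# Hypothetical illustration of the progressive three-stage schedule -- not the
# actual Viper training configuration.
TRAINING_STAGES = [
    {"stage": "1-low-res",    "max_resolution": 336,  "focus": "basic structure and semantics"},
    {"stage": "2-medium-res", "max_resolution": 672,  "focus": "finer patterns, longer sequences"},
    {"stage": "3-high-res",   "max_resolution": 2352, "focus": "full AnyRes inputs, longest sequences"},
]

for cfg in TRAINING_STAGES:
    print(f"{cfg['stage']}: up to {cfg['max_resolution']}px -- {cfg['focus']}")
```
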
| Training Config | Value |
| -------- | ------- |
| GPUs | 128 H100-80G |
| Training time | 14 days |
| Training data | Cambrian-7M |


## Acknowledgment
This project is built upon the following awesome projects: [LLaVA](https://github.com/haotian-liu/LLaVA) and [Open-LLaVA-NeXT](https://github.com/xiaoachen98/Open-LLaVA-NeXT).
We thank AI21 Labs and the Technology Innovation Institute for open-sourcing their powerful LLMs.
We also thank the [Cambrian-1](https://cambrian-mllm.github.io/) project for providing such high-quality vision-language datasets.

## Citation

The paper is coming soon. Meanwhile, please use the following to cite:
```
@article{vipervlm,
  title={Viper: Open Mamba-based Vision-Language Models},
  author={Zhuang, Yufan and Chuang, Pierce and Lu, Yichao and Harpale, Abhay and Bhardwaj, Vikas and Shang, Jingbo},
  year={2024}
}
```