---
license: other
license_name: tongyi-qianwen-research
license_link: LICENSE
pipeline_tag: image-text-to-text
language:
- en
---

# LLaVA Interleave Model Card

## Model Details

**Model type:**
LLaVA Interleave is an open-source chatbot trained by fine-tuning an LLM on multimodal instruction-following data. It is an auto-regressive language model based on the transformer architecture.
Base LLM: [Qwen/Qwen1.5-7B-Chat](https://huggingface.co/Qwen/Qwen1.5-7B-Chat)

**Paper or resources for more information:**
https://llava-vl.github.io/

**Primary intended uses:**
The primary use of LLaVA-Next Interleave is research on large multimodal models and chatbots. It is intended for research exploration only; commercial use is prohibited.

**Primary intended users:**
The primary intended users of the model are researchers and hobbyists in computer vision, natural language processing, machine learning, and artificial intelligence.

## How to use the model

First, make sure you have `transformers >= 4.35.3` installed.
The model supports multi-image and multi-prompt generation, meaning that you can pass multiple images in your prompt. Make sure to follow the correct prompt template (`<|im_start|>user <image>\nxxx<|im_end|><|im_start|>assistant`) and add the token `<image>` wherever you want to query an image:
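
To make the template concrete, a small helper like the hypothetical `build_prompt` below (not part of `transformers`, shown only for illustration) assembles a prompt with one `<image>` placeholder per image:

```python
def build_prompt(user_text: str, num_images: int = 1) -> str:
    # Hypothetical helper: builds the chat-style prompt this checkpoint
    # expects, inserting one <image> placeholder per image you plan to
    # pass to the processor, in order.
    image_tokens = "<image>\n" * num_images
    return f"<|im_start|>user {image_tokens}{user_text}<|im_end|><|im_start|>assistant"

print(build_prompt("What does the label 15 represent?"))
```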

### Using `pipeline`:

Below we use the [`"llava-hf/llava-interleave-qwen-7b-dpo-hf"`](https://huggingface.co/llava-hf/llava-interleave-qwen-7b-dpo-hf) checkpoint.

```python
from transformers import pipeline
from PIL import Image
import requests

model_id = "llava-hf/llava-interleave-qwen-7b-dpo-hf"
pipe = pipeline("image-to-text", model=model_id)
url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/ai2d-demo.jpg"

image = Image.open(requests.get(url, stream=True).raw)
prompt = "<|im_start|>user <image>\nWhat does the label 15 represent? (1) lava (2) core (3) tunnel (4) ash cloud<|im_end|><|im_start|>assistant"

outputs = pipe(image, prompt=prompt, generate_kwargs={"max_new_tokens": 200})
print(outputs)
```

### Using pure `transformers`:

Below is an example script to run generation in `float16` precision on a GPU device:

```python
import requests
from PIL import Image

import torch
from transformers import AutoProcessor, LlavaForConditionalGeneration

model_id = "llava-hf/llava-interleave-qwen-7b-dpo-hf"

prompt = "<|im_start|>user <image>\nWhat are these?<|im_end|><|im_start|>assistant"
image_file = "http://images.cocodataset.org/val2017/000000039769.jpg"

model = LlavaForConditionalGeneration.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    low_cpu_mem_usage=True,
).to(0)

processor = AutoProcessor.from_pretrained(model_id)

raw_image = Image.open(requests.get(image_file, stream=True).raw)
inputs = processor(images=raw_image, text=prompt, return_tensors='pt').to(0, torch.float16)

output = model.generate(**inputs, max_new_tokens=200, do_sample=False)
print(processor.decode(output[0][2:], skip_special_tokens=True))
```
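
Since the model supports interleaved multi-image prompts, you can include several `<image>` tokens and pass a matching list of images. A minimal sketch, assuming the processor accepts a list of PIL images alongside the text (the commented lines reuse the `model` and `processor` objects loaded above):

```python
# Multi-image prompt: one <image> token per image, in the order the
# images will be passed to the processor.
prompt = (
    "<|im_start|>user <image>\n<image>\n"
    "What is the difference between these two images?"
    "<|im_end|><|im_start|>assistant"
)
assert prompt.count("<image>") == 2

# With model/processor loaded as in the snippet above (assumption: the
# processor accepts a list of images matching the <image> tokens):
# inputs = processor(images=[image_one, image_two], text=prompt, return_tensors="pt").to(0, torch.float16)
# output = model.generate(**inputs, max_new_tokens=200, do_sample=False)
```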

### Model optimization

#### 4-bit quantization through `bitsandbytes` library

First make sure to install `bitsandbytes` (`pip install bitsandbytes`) and that you have access to a CUDA-compatible GPU device. Then simply change the snippet above as follows:

```diff
model = LlavaForConditionalGeneration.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    low_cpu_mem_usage=True,
+   load_in_4bit=True
)
```

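As a rough back-of-envelope illustration of why 4-bit quantization helps (weights only; activations, the KV cache, and quantization overhead are ignored), a 7B-parameter model shrinks from about 14 GB of weights in `float16` to about 3.5 GB at 4 bits:

```python
def weight_memory_gb(num_params: float, bits_per_param: int) -> float:
    # Storage for the weights alone: params * bits / 8 bytes, in GB.
    return num_params * bits_per_param / 8 / 1e9

params = 7e9  # approximate parameter count of the 7B checkpoint
print(f"float16: {weight_memory_gb(params, 16):.1f} GB")  # 14.0 GB
print(f"4-bit:   {weight_memory_gb(params, 4):.1f} GB")   # 3.5 GB
```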
#### Use Flash-Attention 2 to further speed-up generation

First make sure to install `flash-attn`; refer to the [original repository of Flash Attention](https://github.com/Dao-AILab/flash-attention) for installation instructions. Then simply change the snippet above as follows:

```diff
model = LlavaForConditionalGeneration.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    low_cpu_mem_usage=True,
+   use_flash_attention_2=True
).to(0)
```

### License Notices

This project utilizes certain datasets and checkpoints that are subject to their respective original licenses. Users must comply with all terms and conditions of those licenses, including but not limited to the OpenAI Terms of Use for the dataset and the licenses of the base language models for checkpoints trained on it (the [Tongyi Qianwen LICENSE AGREEMENT](https://github.com/QwenLM/Qwen/blob/main/Tongyi%20Qianwen%20LICENSE%20AGREEMENT) and the [META LLAMA 3 COMMUNITY LICENSE AGREEMENT](https://llama.meta.com/llama3/license/)). This project does not impose any additional constraints beyond those stipulated in the original licenses. Furthermore, users are reminded to ensure that their use of the datasets and checkpoints complies with all applicable laws and regulations.