---
license: apache-2.0
language:
- zh
- en
pipeline_tag: image-text-to-text
---
## Cite this model
```bibtex
@misc {yuanz_2024,
author = { {yuanz} },
title = { llava_qwen15-4b-chat_openai-clip-vit-large-patch14-336 (Revision 5070a27) },
year = 2024,
url = { https://huggingface.co/yuanzhoulvpi/llava_qwen15-4b-chat_openai-clip-vit-large-patch14-336 },
doi = { 10.57967/hf/3146 },
publisher = { Hugging Face }
}
```
# Training a custom LLaVA model from scratch
1. Build a LLaVA model from openai/clip-vit-large-patch14-336 and Qwen1.5-4B-Chat.
2. Train on the liuhaotian/LLaVA-CC3M-Pretrain-595K dataset.
3. Fine-tune with DeepSpeed ZeRO-2 and LoRA.
# Related GitHub repository
1. [https://github.com/yuanzhoulvpi2017/zero_nlp/tree/main/train_llava](https://github.com/yuanzhoulvpi2017/zero_nlp/tree/main/train_llava)
# Related Bilibili tutorial video
1. To be added
# Inference code
```python
from transformers import LlavaForConditionalGeneration, AutoProcessor
import torch
from PIL import Image
```
```python
raw_model_name_or_path = "yuanzhoulvpi/llava_qwen15-4b-chat_openai-clip-vit-large-patch14-336"
model = LlavaForConditionalGeneration.from_pretrained(
    raw_model_name_or_path, device_map="cuda:0", torch_dtype=torch.bfloat16
)
processor = AutoProcessor.from_pretrained(raw_model_name_or_path)
model.eval()
print('ok')
```
```python
testdata = (
    '<image>\nRelay a brief, clear account of the picture shown.',  # question
    'large kitchen island with an overhang and dining space next to it',  # ground-truth answer
    'data/liuhaotian/LLaVA-CC3M-Pretrain-595K/images_dl/GCC_train_001899387.jpg'  # image path
)
```
```python
def build_model_input(model, processor, testdata: tuple):
    # Wrap the question in the Qwen chat template
    messages = [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": testdata[0]},
    ]
    prompt = processor.tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )
    image = Image.open(testdata[2])
    inputs = processor(text=prompt, images=image, return_tensors="pt")
    for tk in inputs.keys():
        inputs[tk] = inputs[tk].to(model.device)
    generate_ids = model.generate(**inputs, max_new_tokens=20)
    # Strip the prompt tokens, keeping only the newly generated ones
    generate_ids = [
        oid[len(iids):] for oid, iids in zip(generate_ids, inputs.input_ids)
    ]
    gen_text = processor.batch_decode(
        generate_ids, skip_special_tokens=False, clean_up_tokenization_spaces=False
    )[0]
    return gen_text
```
```python
build_model_input(model, processor, testdata)
# 'the kitchen is a bright yellow with a glass top island and a large window that looks out to the'
```