RaushanTurganbay committed
Commit 2867bf4
1 Parent(s): 43b3eca

update processor kwargs

Files changed (1): README.md (+2 -2)
README.md CHANGED
@@ -45,7 +45,7 @@ https://llava-vl.github.io/
 
 ## How to use the model
 
-First, make sure to have `transformers` installed from source or `transformers >= 4.45.0`.
+First, make sure to have `transformers` installed from [this branch](https://github.com/huggingface/transformers/pull/32673) or `transformers >= 4.45.0`.
 The model supports multi-image and multi-prompt generation, meaning you can pass multiple images in your prompt. Also make sure to follow the correct prompt template by applying the chat template:
 
 ### Using `pipeline`:
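Since the changed line above gates everything on the `transformers` version, a runtime guard can catch a stale install before the processor call fails. A minimal sketch, assuming the 4.45.0 bound stated in the prose (`packaging` is already a `transformers` dependency):

```python
from packaging import version
import transformers

# The keyword-only processor call in the next hunk needs the standardized
# processor kwargs; per the README prose they ship in transformers >= 4.45.0
# (or the PR branch linked above).
assert version.parse(transformers.__version__) >= version.parse("4.45.0"), (
    "install transformers>=4.45.0 or the linked PR branch"
)
```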
@@ -117,7 +117,7 @@ prompt = processor.apply_chat_template(conversation, add_generation_prompt=True)
 
 image_file = "http://images.cocodataset.org/val2017/000000039769.jpg"
 raw_image = Image.open(requests.get(image_file, stream=True).raw)
-inputs = processor(prompt, raw_image, return_tensors='pt').to(0, torch.float16)
+inputs = processor(images=raw_image, text=prompt, return_tensors='pt').to(0, torch.float16)
 
 output = model.generate(**inputs, max_new_tokens=200, do_sample=False)
 print(processor.decode(output[0][2:], skip_special_tokens=True))
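Assembled end-to-end, the snippet this hunk edits looks like the sketch below. The checkpoint id and model class are assumptions, since the diff never names them; substitute the checkpoint this README describes:

```python
import requests
import torch
from PIL import Image
from transformers import AutoProcessor, LlavaOnevisionForConditionalGeneration

# Placeholder assumption: substitute the checkpoint this README describes.
model_id = "llava-hf/llava-onevision-qwen2-7b-ov-hf"

model = LlavaOnevisionForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.float16
).to(0)
processor = AutoProcessor.from_pretrained(model_id)

# Build the prompt with the chat template, as in the README context line above.
conversation = [
    {
        "role": "user",
        "content": [
            {"type": "image"},
            {"type": "text", "text": "What is shown in this image?"},
        ],
    },
]
prompt = processor.apply_chat_template(conversation, add_generation_prompt=True)

image_file = "http://images.cocodataset.org/val2017/000000039769.jpg"
raw_image = Image.open(requests.get(image_file, stream=True).raw)

# The commit's change: images/text are passed as keywords instead of
# positionally, so the call no longer depends on the processor's argument order.
inputs = processor(images=raw_image, text=prompt, return_tensors="pt").to(0, torch.float16)

output = model.generate(**inputs, max_new_tokens=200, do_sample=False)
print(processor.decode(output[0][2:], skip_special_tokens=True))
```

Keyword arguments keep the call valid even if a release reorders the positional parameters, which is what makes the keyword form safer than the positional one it replaces.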
 