liuhaotian committed
Commit bda7e36
1 Parent(s): 57950b6

Update README.md

Files changed (1)
  1. README.md +3 -3
README.md CHANGED
@@ -30,14 +30,14 @@ The model supports multi-image and multi-prompt generation. Meaning that you can
 
 ### Using `pipeline`:
 
- Below we use the [`"llava-hf/bakLlava-v1-hf"`](https://huggingface.co/llava-hf/bakLlava-v1-hf) checkpoint.
+ Below we use the [`"llava-hf/llava-1.5-13b-hf"`](https://huggingface.co/llava-hf/llava-1.5-13b-hf) checkpoint.
 
 ```python
 from transformers import pipeline
 from PIL import Image
 import requests
 
- model_id = "llava-hf/bakLlava-v1-hf"
+ model_id = "llava-hf/llava-1.5-13b-hf"
 pipe = pipeline("image-to-text", model=model_id)
 url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/ai2d-demo.jpg"
 
@@ -60,7 +60,7 @@ from PIL import Image
 import torch
 from transformers import AutoProcessor, LlavaForConditionalGeneration
 
- model_id = "llava-hf/llava-1.5-7b-hf"
+ model_id = "llava-hf/llava-1.5-13b-hf"
 
 prompt = "<image> \nUSER: What are these?\nASSISTANT:"
 image_file = "http://images.cocodataset.org/val2017/000000039769.jpg"
 
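The `pipeline` snippet in the first hunk is truncated after the `url = ...` line. For reference, a minimal runnable completion might look like the sketch below; everything past the `url` assignment (image loading, the prompt text, and the generation call) is an assumption, not part of the diff:

```python
from transformers import pipeline
from PIL import Image
import requests

model_id = "llava-hf/llava-1.5-13b-hf"
pipe = pipeline("image-to-text", model=model_id)

url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/ai2d-demo.jpg"
image = Image.open(requests.get(url, stream=True).raw)  # fetch the demo image

# Illustrative prompt (assumed; the diff does not show the original prompt text)
prompt = "USER: <image>\nWhat does this diagram show?\nASSISTANT:"

outputs = pipe(image, prompt=prompt, generate_kwargs={"max_new_tokens": 200})
print(outputs[0]["generated_text"])
```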
 
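The second hunk's direct-loading example is likewise cut off after `image_file = ...`. A plausible completion under the same caveat: the `from_pretrained` arguments, device placement, and `generate` call below are assumptions, chosen to be idiomatic for a 13B checkpoint in half precision:

```python
import torch
import requests
from PIL import Image
from transformers import AutoProcessor, LlavaForConditionalGeneration

model_id = "llava-hf/llava-1.5-13b-hf"

prompt = "<image> \nUSER: What are these?\nASSISTANT:"
image_file = "http://images.cocodataset.org/val2017/000000039769.jpg"

# Everything below this point is reconstructed; the diff ends at image_file.
model = LlavaForConditionalGeneration.from_pretrained(
    model_id,
    torch_dtype=torch.float16,   # assumed: fp16 so the 13B weights fit on a single GPU
    low_cpu_mem_usage=True,
).to("cuda")

processor = AutoProcessor.from_pretrained(model_id)

raw_image = Image.open(requests.get(image_file, stream=True).raw)
inputs = processor(text=prompt, images=raw_image, return_tensors="pt").to("cuda", torch.float16)

output = model.generate(**inputs, max_new_tokens=200, do_sample=False)
print(processor.decode(output[0], skip_special_tokens=True))
```

With `do_sample=False`, generation is greedy and deterministic, which keeps the example reproducible across runs.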