Commit 385e58b (parent dc87e1b), committed by Jingya

Update README.md

Files changed (1): README.md (+17, -3)
README.md CHANGED
@@ -31,20 +31,34 @@ You can use the raw model for semantic segmentation. See the [model hub](https:/
 Here is how to use this model to segment an image of the COCO 2017 dataset:
 
 ```python
-from optimum.onnxruntime import ORTModelForSemanticSegmentation
+from transformers import SegformerImageProcessor
 from PIL import Image
 import requests
 
-feature_extractor = SegformerFeatureExtractor.from_pretrained("optimum/segformer-b0-finetuned-ade-512-512")
+from optimum.onnxruntime import ORTModelForSemanticSegmentation
+
+image_processor = SegformerImageProcessor.from_pretrained("optimum/segformer-b0-finetuned-ade-512-512")
 model = ORTModelForSemanticSegmentation.from_pretrained("optimum/segformer-b0-finetuned-ade-512-512")
 
 url = "http://images.cocodataset.org/val2017/000000039769.jpg"
 image = Image.open(requests.get(url, stream=True).raw)
 
-inputs = feature_extractor(images=image, return_tensors="pt")
+inputs = image_processor(images=image, return_tensors="pt")
 outputs = model(**inputs)
 logits = outputs.logits  # shape (batch_size, num_labels, height/4, width/4)
 ```
+If you use the pipeline API:
+
+```python
+from transformers import SegformerImageProcessor, pipeline
+from optimum.onnxruntime import ORTModelForSemanticSegmentation
+
+image_processor = SegformerImageProcessor.from_pretrained("optimum/segformer-b0-finetuned-ade-512-512")
+model = ORTModelForSemanticSegmentation.from_pretrained("optimum/segformer-b0-finetuned-ade-512-512")
+
+pipe = pipeline("image-segmentation", model=model, feature_extractor=image_processor)
+pred = pipe(url)
+```
 
 For more code examples, we refer to the [Optimum documentation](https://huggingface.co/docs/optimum/onnxruntime/usage_guides/models).
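
For reference, a minimal sketch of turning the raw `logits` into a per-pixel segmentation map. It assumes the logits come back as a `torch.Tensor` (the usual case when the ORT model is fed PyTorch inputs) and that `image` is the PIL image loaded in the snippet above:

```python
import torch.nn.functional as F

# Upsample the (batch_size, num_labels, height/4, width/4) logits back to the
# original image resolution, then take the most likely label per pixel.
upsampled = F.interpolate(
    logits,
    size=image.size[::-1],  # PIL gives (width, height); interpolate expects (height, width)
    mode="bilinear",
    align_corners=False,
)
segmentation_map = upsampled.argmax(dim=1)[0]  # (height, width) tensor of class ids
```

Alternatively, `SegformerImageProcessor.post_process_semantic_segmentation(outputs, target_sizes=...)` in transformers wraps the same resize-and-argmax step.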