wangfangyuan committed
Commit 14ea9e7 • 1 Parent(s): 550bbd9

update for regression test

Files changed (1)
README.md  (+21, -20)
README.md CHANGED
@@ -43,23 +43,24 @@ You can use the raw model for object detection. See the [model hub](https://hugg
 
 The dataset MSCOCO2017 contains 118287 images for training and 5000 images for validation.
 
-Download COCO dataset and create directories in your code like this:
+Download COCO dataset and create/mount directories in your code like this:
 ```plain
-└── datasets
-    └── coco
-        ├── annotations
-        |   ├── instances_val2017.json
-        |   └── ...
-        ├── labels
-        |   ├── val2017
-        |   |   ├── 000000000139.txt
-        |   |   ├── 000000000285.txt
-        |   |   └── ...
-        ├── images
-        |   ├── val2017
-        |   |   ├── 000000000139.jpg
-        |   |   ├── 000000000285.jpg
-        └── val2017.txt
+└── yolov8m
+    └── datasets
+        └── coco
+            ├── annotations
+            |   ├── instances_val2017.json
+            |   └── ...
+            ├── labels
+            |   ├── val2017
+            |   |   ├── 000000000139.txt
+            |   |   ├── 000000000285.txt
+            |   |   └── ...
+            ├── images
+            |   ├── val2017
+            |   |   ├── 000000000139.jpg
+            |   |   ├── 000000000285.jpg
+            └── val2017.txt
 ```
 1. put the val2017 image folder under the images directory or use a softlink
 2. the labels folder and val2017.txt above are generated by **general_json2yolo.py**
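If **general_json2yolo.py** is not at hand, the conversion it performs can be approximated as follows. This is a minimal sketch under stated assumptions, not the actual script: it reads only `instances_val2017.json`, writes one YOLO-format label file per image plus a `val2017.txt` image list, and remaps the non-contiguous COCO category ids to contiguous 0-79 indices. The exact paths written into `val2017.txt` may need to match what `onnx_eval.py` expects.

```python
# Minimal sketch of a COCO -> YOLO label conversion (a stand-in for general_json2yolo.py,
# not the original script). Paths follow the directory tree above and are assumptions.
import json
from pathlib import Path

coco_root = Path("yolov8m/datasets/coco")
ann_file = coco_root / "annotations" / "instances_val2017.json"
label_dir = coco_root / "labels" / "val2017"
label_dir.mkdir(parents=True, exist_ok=True)

data = json.loads(ann_file.read_text())
images = {img["id"]: img for img in data["images"]}
# COCO category ids are non-contiguous (1..90); remap them to contiguous 0..79 indices.
cat_map = {cid: i for i, cid in enumerate(sorted(c["id"] for c in data["categories"]))}

labels = {}  # image file name -> list of YOLO label lines
for ann in data["annotations"]:
    if ann.get("iscrowd", 0):
        continue
    img = images[ann["image_id"]]
    w, h = img["width"], img["height"]
    x, y, bw, bh = ann["bbox"]  # COCO bbox: top-left x, y, width, height in pixels
    # YOLO format: "class x_center y_center width height", all normalized to [0, 1].
    line = (f"{cat_map[ann['category_id']]} {(x + bw / 2) / w:.6f} {(y + bh / 2) / h:.6f} "
            f"{bw / w:.6f} {bh / h:.6f}")
    labels.setdefault(img["file_name"], []).append(line)

for file_name, lines in labels.items():
    (label_dir / Path(file_name).with_suffix(".txt").name).write_text("\n".join(lines) + "\n")

# val2017.txt lists the evaluation images; the path prefix expected by onnx_eval.py
# is an assumption here.
with (coco_root / "val2017.txt").open("w") as f:
    for img in data["images"]:
        f.write(f"./images/val2017/{img['file_name']}\n")
```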
@@ -87,8 +88,8 @@ for batch in dataset:
     im = preprocess(im)
     if len(im.shape) == 3:
         im = im[None]
-    outputs = onnx_model.run(None, {onnx_model.get_inputs()[0].name: im.cpu().numpy()})
-    outputs = [torch.tensor(item) for item in outputs]
+    outputs = onnx_model.run(None, {onnx_model.get_inputs()[0].name: im.permute(0, 2, 3, 1).cpu().numpy()})
+    outputs = [torch.tensor(item).permute(0, 3, 1, 2) for item in outputs]
     preds = post_process(outputs)
     preds = non_max_suppression(
         preds, 0.25, 0.7, agnostic=False, max_det=300, classes=None
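The change in this hunk feeds the ONNX model channels-last (NHWC) input and converts each output back to channels-first (NCHW) before post-processing, which suggests the updated model expects NHWC tensors. A self-contained illustration of that layout handling is sketched below; the model path, input shape, and 4-D feature-map outputs are assumptions for the example, not taken from the repository.

```python
# Illustrative sketch of NCHW <-> NHWC handling around an onnxruntime session.
# Assumes a 640x640 input and 4-D (feature-map) outputs; names/shapes are hypothetical.
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("yolov8m.onnx", providers=["CPUExecutionProvider"])
input_name = session.get_inputs()[0].name

im_nchw = np.random.rand(1, 3, 640, 640).astype(np.float32)  # preprocessed batch, NCHW
im_nhwc = np.transpose(im_nchw, (0, 2, 3, 1))                # NCHW -> NHWC for the model

outputs = session.run(None, {input_name: im_nhwc})
# Convert each 4-D output back to NCHW so the existing post-processing stays unchanged.
outputs_nchw = [np.transpose(o, (0, 3, 1, 2)) for o in outputs]
```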
@@ -105,12 +106,12 @@ for batch in dataset:
 
 - Run inference for a single image
   ```python
-  python onnx_inference.py -m ./yolov8m_qat.onnx -i /Path/To/Your/Image --ipu --provider_config /Path/To/Your/Provider_config
+  python onnx_inference.py -m ./yolov8m.onnx -i /Path/To/Your/Image --ipu --provider_config /Path/To/Your/Provider_config
   ```
   *Note: __vaip_config.json__ is located in the setup package of Ryzen AI (refer to [Installation](#installation))*
 - Test accuracy of the quantized model
   ```python
-  python onnx_eval.py -m ./yolov8m_qat.onnx --ipu --provider_config /Path/To/Your/Provider_config
+  python onnx_eval.py -m ./yolov8m.onnx --ipu --provider_config /Path/To/Your/Provider_config
   ```
 
 ### Performance
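Both commands offload execution to the IPU via `--ipu` and the config passed through `--provider_config`. Under the hood this typically means creating an onnxruntime session with the Vitis AI execution provider; the sketch below shows that pattern under the assumption that the scripts follow the standard Ryzen AI setup, so the option names and paths are illustrative rather than taken from `onnx_inference.py` or `onnx_eval.py`.

```python
# Hedged sketch: creating an onnxruntime session that targets the IPU through the
# Vitis AI execution provider. The config_file path below is illustrative.
import onnxruntime as ort

session = ort.InferenceSession(
    "yolov8m.onnx",
    providers=["VitisAIExecutionProvider"],
    provider_options=[{"config_file": "/Path/To/Your/Provider_config/vaip_config.json"}],
)
print(session.get_providers())  # verify the Vitis AI provider actually loaded
```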
 