---
license: apache-2.0
tags:
- RyzenAI
- object-detection
- vision
- YOLO
- Pytorch
datasets:
- COCO
metrics:
- mAP
---
# YOLOv8m model trained on COCO

YOLOv8m is the medium variant of the YOLOv8 model, trained on the COCO object detection dataset (118k annotated images) at a resolution of 640x640. It was released in [https://github.com/ultralytics/ultralytics](https://github.com/ultralytics/ultralytics).

We developed a modified version that is supported by [AMD Ryzen AI](https://onnxruntime.ai/docs/execution-providers/Vitis-AI-ExecutionProvider.html).


## Model description

Ultralytics YOLOv8 is a cutting-edge, state-of-the-art (SOTA) model that builds upon the success of previous YOLO versions and introduces new features and improvements to further boost performance and flexibility. YOLOv8 is designed to be fast, accurate, and easy to use, making it an excellent choice for a wide range of object detection and tracking, instance segmentation, image classification and pose estimation tasks.


## Intended uses & limitations

You can use the raw model for object detection. See the [model hub](https://huggingface.co/models?search=amd/yolov8) to look for all available YOLOv8 models.


## How to use

### Installation

   Follow the [Ryzen AI Installation](https://ryzenai.docs.amd.com/en/latest/inst.html) guide to prepare the environment for Ryzen AI.
   Then run the following script to install the prerequisites for this model.
   ```bash
   pip install -r requirements.txt 
   ```


### Data Preparation (optional: for accuracy evaluation)

The MSCOCO2017 dataset contains 118,287 images for training and 5,000 images for validation.

Download COCO dataset and create/mount directories in your code like this:
  ```plain
  └── yolov8m
      └── datasets
          └── coco
              ├── annotations
              │   ├── instances_val2017.json
              │   └── ...
              ├── labels
              │   └── val2017
              │       ├── 000000000139.txt
              │       ├── 000000000285.txt
              │       └── ...
              ├── images
              │   └── val2017
              │       ├── 000000000139.jpg
              │       ├── 000000000285.jpg
              │       └── ...
              └── val2017.txt
  ```
1. Put the `val2017` image folder under the `images` directory, or use a symlink.
2. The `labels` folder and `val2017.txt` above are generated by **general_json2yolo.py**.
3. Modify `coco.yaml` as follows:
```yaml
path: /path/to/your/datasets/coco  # dataset root dir
train: train2017.txt  # train images (relative to 'path') 118287 images
val: val2017.txt  # val images (relative to 'path') 5000 images
```
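The COCO-to-YOLO label conversion performed by **general_json2yolo.py** boils down to one box transform: COCO stores `[x_min, y_min, width, height]` in absolute pixels, while YOLO labels use `[x_center, y_center, width, height]` normalized to `[0, 1]`. A minimal sketch of that transform (the function name here is hypothetical, not taken from the script):

```python
def coco_to_yolo_bbox(bbox, img_w, img_h):
    """Convert a COCO box [x_min, y_min, width, height] (absolute pixels)
    into a YOLO box [x_center, y_center, width, height] normalized by the
    image size. Hypothetical helper illustrating what general_json2yolo.py
    does for each annotation."""
    x, y, w, h = bbox
    return [
        (x + w / 2) / img_w,  # normalized box-center x
        (y + h / 2) / img_h,  # normalized box-center y
        w / img_w,            # normalized width
        h / img_h,            # normalized height
    ]
```

Each line of a `labels/val2017/*.txt` file is then `class_id` followed by these four normalized values.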


### Test & Evaluation

 - Code snippet from [`infer_onnx.py`](./infer_onnx.py) showing how to run inference:
```python
args = make_parser().parse_args()
source = args.image_path 
dataset = LoadImages(
    source, imgsz=imgsz, stride=32, auto=False, transforms=None, vid_stride=1
)
onnx_weight = args.model
onnx_model = onnxruntime.InferenceSession(onnx_weight)
for batch in dataset:
    path, im, im0s, vid_cap, s = batch
    im = preprocess(im)
    if len(im.shape) == 3:
        im = im[None]
    outputs = onnx_model.run(None, {onnx_model.get_inputs()[0].name: im.permute(0, 2, 3, 1).cpu().numpy()})
    outputs = [torch.tensor(item).permute(0, 3, 1, 2) for item in outputs]
    preds = post_process(outputs)
    preds = non_max_suppression(
        preds, 0.25, 0.7, agnostic=False, max_det=300, classes=None
    )
    plot_images(
        im,
        *output_to_target(preds, max_det=15),
        source,
        fname=args.output_path,
        names=names,
    )

```
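The `preprocess` call in the snippet above is defined in `infer_onnx.py`; as a rough guide to what it must do, here is a minimal sketch, assuming `LoadImages` yields a CHW `uint8` numpy array already letterboxed to the model input size (the real implementation may differ):

```python
import numpy as np
import torch

def preprocess(im):
    """Hypothetical sketch of the preprocess step used in the snippet above.

    Assumes `im` is a CHW uint8 numpy array (already resized/letterboxed by
    LoadImages). Converts it to a contiguous float32 torch tensor scaled to
    the [0, 1] range expected by the model."""
    im = torch.from_numpy(np.ascontiguousarray(im))
    return im.float() / 255.0
```

The snippet then adds a batch dimension with `im[None]` and permutes to NHWC before handing the tensor to ONNX Runtime.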

 - Run inference for a single image:
  ```bash
  python infer_onnx.py --onnx_model ./yolov8m.onnx -i /Path/To/Your/Image --ipu --provider_config /Path/To/Your/Provider_config
  ```
*Note: __vaip_config.json__ is located in the Ryzen AI setup package (refer to [Installation](#installation))*
 - Test the accuracy of the quantized model:
  ```bash
  python eval_onnx.py --onnx_model ./yolov8m.onnx --ipu --provider_config /Path/To/Your/Provider_config
  ```

### Performance

|Metric |Accuracy on IPU|
| :----:  | :----: |
|AP\@0.50:0.95|0.486|
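AP@0.50:0.95 is the standard COCO metric: average precision averaged over ten IoU (intersection-over-union) thresholds from 0.50 to 0.95 in steps of 0.05. The IoU at the heart of this metric can be sketched as follows (a self-contained illustration, not code from this repository):

```python
def box_iou(a, b):
    """Intersection-over-union of two boxes in [x1, y1, x2, y2] format.

    Illustrative sketch of the overlap measure underlying AP@0.50:0.95;
    evaluation scripts typically use a vectorized equivalent."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])  # intersection top-left
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])  # intersection bottom-right
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)
```

A prediction counts as a true positive at a given threshold only if its IoU with a ground-truth box meets that threshold, so AP@0.50:0.95 rewards tight localization more than AP@0.50 alone.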


## Citation

```bibtex
@software{yolov8_ultralytics,
  author = {Glenn Jocher and Ayush Chaurasia and Jing Qiu},
  title = {Ultralytics YOLOv8},
  version = {8.0.0},
  year = {2023},
  url = {https://github.com/ultralytics/ultralytics},
  orcid = {0000-0001-5950-6979, 0000-0002-7603-6750, 0000-0003-3783-7069},
  license = {AGPL-3.0}
}
```