Projects Based on MMRazor
There are many research works and pre-trained models built on MMRazor. We list some of them here as examples of how to use MMRazor slimmable models in downstream frameworks. Since this page may not be complete, please feel free to contribute more efficient mmrazor models to keep it up to date.
Description
This is an implementation of the MMRazor Searchable Backbone Application: we provide detection configs and models for MMRazor backbones in MMYOLO.
Backbone support
Here are the Neural Architecture Search (NAS) models from MMRazor that support the YOLO series. If you are looking for MMRazor models for the backbone only, you can refer to the MMRazor ModelZoo and the corresponding repository.
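In a MMYOLO config, a searched MMRazor backbone replaces the detector's default backbone. The following is a hypothetical sketch of that pattern; the base-config path, wrapper type, and field names are assumptions for illustration, not verbatim from the repository:

```python
# Hypothetical MMYOLO config sketch (paths, type names, and fields are
# assumptions for illustration, not verbatim from the repository).
_base_ = ['../../yolov5/yolov5_s-v61_syncbn_8xb16-300e_coco.py']  # assumed base config

model = dict(
    backbone=dict(
        _delete_=True,             # drop the base detector's default backbone
        type='mmrazor.sub_model',  # assumed MMRazor wrapper for a searched subnet
        fix_subnet='SEARCHED_SUBNET.yaml',  # placeholder: exported architecture file
    ))
```

The released configs under `configs/razor/subnets/` are the authoritative versions of this pattern.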
Usage
Prerequisites
- MMRazor v1.0.0rc2 or higher (dev-1.x)
Install MMRazor using MIM:

```shell
mim install mmengine
mim install "mmrazor>=1.0.0rc2"
```
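The prerequisite above compares pre-release version strings, where `1.0.0rc2` satisfies `>=1.0.0rc2` but `1.0.0rc1` does not. As a side note, here is a minimal stdlib-only sketch of such a check; the parsing scheme is a simplified assumption handling only `X.Y.Z` and `X.Y.ZrcN` forms (real tooling follows PEP 440):

```python
# Minimal sketch: check that an installed version string meets the
# ">= 1.0.0rc2" prerequisite, without third-party packaging helpers.
import re

def parse_version(v):
    """Split '1.0.0rc2' into ((1, 0, 0), rc_rank) for tuple comparison.
    Release candidates sort before the corresponding final release."""
    m = re.match(r"(\d+)\.(\d+)\.(\d+)(?:rc(\d+))?$", v)
    if not m:
        raise ValueError(f"unrecognized version: {v}")
    major, minor, patch, rc = m.groups()
    # A missing rc suffix means a final release, which sorts after any rc.
    rc_rank = int(rc) if rc is not None else float("inf")
    return (int(major), int(minor), int(patch)), rc_rank

def meets_prerequisite(installed, required="1.0.0rc2"):
    return parse_version(installed) >= parse_version(required)

print(meets_prerequisite("1.0.0rc2"))  # True
print(meets_prerequisite("1.0.0rc1"))  # False
print(meets_prerequisite("1.0.0"))     # True: final release sorts after rc2
```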
Or install MMRazor from source:

```shell
git clone -b dev-1.x https://github.com/open-mmlab/mmrazor.git
cd mmrazor
# Install MMRazor
mim install -v -e .
```
Training commands
In MMYOLO's root directory, run the following command to train the model on a single GPU:

```shell
CUDA_VISIBLE_DEVICES=0 PORT=29500 ./tools/dist_train.sh configs/razor/subnets/yolov5_s_spos_shufflenetv2_syncbn_8xb16-300e_coco.py
```
If you want to train in parallel on multiple GPUs, run the following command:

```shell
CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 PORT=29500 ./tools/dist_train.sh configs/razor/subnets/yolov5_s_spos_shufflenetv2_syncbn_8xb16-300e_coco.py
```
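The two commands above differ only in the GPU list and share the same port. A small sketch of how the environment-prefixed command is assembled; the helper name and behavior are this example's own, not part of MMYOLO:

```python
# Hypothetical helper (not part of MMYOLO) that builds the distributed
# training command shown above from a list of GPU ids.
def build_train_cmd(config, gpu_ids, port=29500):
    devices = ','.join(str(g) for g in gpu_ids)
    return (f"CUDA_VISIBLE_DEVICES={devices} PORT={port} "
            f"./tools/dist_train.sh {config}")

cfg = 'configs/razor/subnets/yolov5_s_spos_shufflenetv2_syncbn_8xb16-300e_coco.py'
print(build_train_cmd(cfg, [0]))       # the single-GPU command
print(build_train_cmd(cfg, range(8)))  # the 8-GPU command
```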
Testing commands
In MMYOLO's root directory, run the following command to test the model:

```shell
CUDA_VISIBLE_DEVICES=0 PORT=29500 ./tools/dist_test.sh configs/razor/subnets/yolov5_s_spos_shufflenetv2_syncbn_8xb16-300e_coco.py ${CHECKPOINT_PATH}
```
Results and Models
Here we provide the baseline versions of the YOLO series alongside their NAS-backbone counterparts.
| Model | size | box AP | Params(M) | FLOPs(G) | Config | Download |
| :------------------------: | :--: | :----: | :-----------: | :------: | :----: | :----------: |
| yolov5-s | 640 | 37.7 | 7.235 | 8.265 | config | model \| log |
| yolov5_s_spos_shufflenetv2 | 640 | 38.0 | 7.04 (-2.7%) | 7.03 | config | model \| log |
| yolov6-s | 640 | 44.0 | 18.869 | 24.253 | config | model \| log |
| yolov6_l_attentivenas_a6 | 640 | 45.3 | 18.38 (-2.6%) | 8.49 | config | model \| log |
| RTMDet-tiny | 640 | 41.0 | 4.8 | 8.1 | config | model \| log |
| rtmdet_tiny_ofa_lat31 | 960 | 41.3 | 3.91 (-18.5%) | 6.09 | config | model \| log |
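The percentages in the Params(M) column are the parameter reduction relative to the baseline row above each NAS model. A quick check of that arithmetic:

```python
# Sketch: how the parameter-reduction percentages in the table are derived.
def reduction(baseline_m, nas_m):
    """Relative parameter reduction versus the baseline, in percent."""
    return round((baseline_m - nas_m) / baseline_m * 100, 1)

print(reduction(7.235, 7.04))    # 2.7  -> yolov5_s_spos_shufflenetv2 vs yolov5-s
print(reduction(18.869, 18.38))  # 2.6  -> yolov6_l_attentivenas_a6 vs yolov6-s
print(reduction(4.8, 3.91))      # 18.5 -> rtmdet_tiny_ofa_lat31 vs RTMDet-tiny
```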
Note:
- For a fair comparison, the training configuration is kept consistent with the original configuration, which yields an improvement of about 0.2-0.5% AP.
- `yolov5_s_spos_shufflenetv2` achieves 38.0% AP with only 7.042M parameters by directly replacing the backbone, and outperforms `yolov5_s` of a similar size by more than 0.3% AP.
- With the efficient backbone of `yolov6_l_attentivenas_a6`, the input channels of `YOLOv6RepPAFPN` are reduced. Meanwhile, the `deepen_factor` is modified and the neck is made deeper to restore the AP.
- With the `rtmdet_tiny_ofa_lat31` backbone, which has only 3.315M parameters and 3.634G FLOPs, we can raise the input resolution to 960: the model size stays similar to `rtmdet_tiny`, yet it exceeds `rtmdet_tiny` by 0.4% AP while reducing the size of the whole model to 3.91 MB.