# Model Serving
`MMOCR` provides some utilities that facilitate the model serving process.
Here is a quick walkthrough of the steps needed to serve a model through an API.
## Install TorchServe
You can follow the steps on the [official website](https://github.com/pytorch/serve#install-torchserve-and-torch-model-archiver) to install `TorchServe` and
`torch-model-archiver`.
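For a quick start, both tools are also distributed on PyPI. A minimal sketch, assuming you already have a working Python environment and the Java runtime that TorchServe requires:
```shell
# Install TorchServe and the model archiver from PyPI
# (see the official instructions above for Java and GPU prerequisites)
pip install torchserve torch-model-archiver
```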
## Convert model from MMOCR to TorchServe
We provide a handy tool to convert any `.pth` model into a `.mar` model
for TorchServe.
```shell
python tools/deployment/mmocr2torchserve.py ${CONFIG_FILE} ${CHECKPOINT_FILE} \
    --output-folder ${MODEL_STORE} \
    --model-name ${MODEL_NAME}
```
:::{note}
`${MODEL_STORE}` needs to be an absolute path to a folder.
:::
For example:
```shell
python tools/deployment/mmocr2torchserve.py \
    configs/textdet/dbnet/dbnet_r18_fpnc_1200e_icdar2015.py \
    checkpoints/dbnet_r18_fpnc_1200e_icdar2015.pth \
    --output-folder ./checkpoints \
    --model-name dbnet
```
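The converter writes a `${MODEL_NAME}.mar` archive into the output folder, so after the example above you can quickly verify the artifact that TorchServe will load:
```shell
# The conversion above should have produced this archive
ls ./checkpoints/dbnet.mar
```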
## Start Serving
### From your Local Machine
With your models prepared, the next step is to start the service with a one-line command:
```bash
# To load all the models in ./checkpoints
torchserve --start --model-store ./checkpoints --models all
# Or, if you only want one model to serve, say dbnet
torchserve --start --model-store ./checkpoints --models dbnet=dbnet.mar
```
Then you can access inference, management and metrics services
through TorchServe's REST API.
You can find their usages in [TorchServe REST API](https://github.com/pytorch/serve/blob/master/docs/rest_api.md).
| Service | Address |
| ------------------- | ----------------------- |
| Inference | `http://127.0.0.1:8080` |
| Management | `http://127.0.0.1:8081` |
| Metrics | `http://127.0.0.1:8082` |
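A quick way to confirm that the services are up is to ping the inference endpoint and list the registered models through the management API; when you are done, the server can be shut down again with `torchserve --stop`:
```bash
# Health check against the inference service
curl http://127.0.0.1:8080/ping
# List the models currently registered with the management service
curl http://127.0.0.1:8081/models
# Shut the server down when you are finished
torchserve --stop
```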
:::{note}
By default, TorchServe binds ports `8080`, `8081` and `8082` to its services.
You can change this behavior by saving the contents below to `config.properties` and running TorchServe with the option `--ts-config config.properties`.
```bash
inference_address=http://0.0.0.0:8080
management_address=http://0.0.0.0:8081
metrics_address=http://0.0.0.0:8082
number_of_netty_threads=32
job_queue_size=1000
model_store=/home/model-server/model-store
```
:::
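For instance, with the customized `config.properties` in the current directory, the service can be started like this:
```bash
# Pick up the customized configuration at start time
torchserve --start --model-store ./checkpoints --models all --ts-config config.properties
```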
### From Docker
A better alternative is to serve your models through Docker. We provide a Dockerfile
that frees you from tedious and error-prone environment setup steps.
#### Build `mmocr-serve` Docker image
```shell
docker build -t mmocr-serve:latest docker/serve/
```
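Once the build finishes, you can confirm that the image is available locally:
```shell
# List the locally available mmocr-serve images
docker images mmocr-serve
```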
#### Run `mmocr-serve` with Docker
In order to run Docker with GPUs, you need to install [nvidia-docker](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html); alternatively, you can omit the `--gpus` argument for a CPU-only session.
The command below runs `mmocr-serve` on a GPU, binds ports `8080` (inference),
`8081` (management) and `8082` (metrics) from the container to `127.0.0.1` on the host, and mounts
the checkpoint folder `./checkpoints` from the host machine to `/home/model-server/model-store`
in the container. For more information, please check the official docs for [running TorchServe with docker](https://github.com/pytorch/serve/blob/master/docker/README.md#running-torchserve-in-a-production-docker-environment).
```shell
docker run --rm \
    --cpus 8 \
    --gpus device=0 \
    -p8080:8080 -p8081:8081 -p8082:8082 \
    --mount type=bind,source=`realpath ./checkpoints`,target=/home/model-server/model-store \
    mmocr-serve:latest
```
:::{note}
`realpath ./checkpoints` resolves to the absolute path of `./checkpoints`; you can replace it with the absolute path to wherever you store your TorchServe models.
:::
Once the container is running, you can access inference, management and metrics services
through TorchServe's REST API.
You can find their usages in [TorchServe REST API](https://github.com/pytorch/serve/blob/master/docs/rest_api.md).
| Service | Address |
| ------------------- | ----------------------- |
| Inference | `http://127.0.0.1:8080` |
| Management | `http://127.0.0.1:8081` |
| Metrics | `http://127.0.0.1:8082` |
## Test Deployment
The Inference API allows users to post an image to a model and returns the prediction result.
```shell
curl http://127.0.0.1:8080/predictions/${MODEL_NAME} -T demo/demo_text_det.jpg
```
For example,
```shell
curl http://127.0.0.1:8080/predictions/dbnet -T demo/demo_text_det.jpg
```
For detection models, you should obtain a JSON response with a field named `boundary_result`. Each array inside it contains float numbers representing the x, y
coordinates of the boundary vertices in clockwise order, with the last float number being the
confidence score.
```json
{
  "boundary_result": [
    [
      221.18990004062653,
      226.875,
      221.18990004062653,
      212.625,
      244.05868631601334,
      212.625,
      244.05868631601334,
      226.875,
      0.80883354575186
    ]
  ]
}
```
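If you have `jq` installed, the response can also be inspected directly on the command line. A small sketch reusing the `dbnet` example (the `-s` flag only silences curl's progress output):
```shell
# Print the confidence score (last element) of each detected boundary
curl -s http://127.0.0.1:8080/predictions/dbnet -T demo/demo_text_det.jpg \
    | jq '[.boundary_result[] | .[-1]]'
```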
For recognition models, the response should look like:
```json
{
  "text": "sier",
  "score": 0.5247521847486496
}
```
You can also use `test_torchserve.py` to compare the results of TorchServe and PyTorch by visualizing them.
```shell
python tools/deployment/test_torchserve.py ${IMAGE_FILE} ${CONFIG_FILE} ${CHECKPOINT_FILE} ${MODEL_NAME}
[--inference-addr ${INFERENCE_ADDR}] [--device ${DEVICE}]
```
Example:
```shell
python tools/deployment/test_torchserve.py \
    demo/demo_text_det.jpg \
    configs/textdet/dbnet/dbnet_r18_fpnc_1200e_icdar2015.py \
    checkpoints/dbnet_r18_fpnc_1200e_icdar2015.pth \
    dbnet
```