---
tags:
- ultralyticsplus
- yolov5
- ultralytics
- yolo
- vision
- object-detection
- pytorch
- awesome-yolov8-models
- indonesia
- layout detector
model-index:
- name: hermanshid/yolo-layout-detector
results:
- task:
type: object-detection
metrics:
- type: precision # since mAP@0.5 is not available on hf.co/metrics
value: 0.979 # min: 0.0 - max: 1.0
name: mAP@0.5(box)
inference: false
---
# YOLOv5 for Layout Detection
## Dataset
The dataset is available on [Kaggle](https://www.kaggle.com/datasets/hermansugiharto/book-layout).
## Supported Labels
```python
["caption", "chart", "image", "image_caption", "table", "table_caption", "text", "title"]
```
## How to use
- Install the required packages (`ultralyticsplus` provides the `render_result` helper used below):
  `pip install yolov5==7.0.5 torch ultralyticsplus`
## Load the model and perform prediction
```python
import yolov5
from ultralyticsplus import render_result  # rendering helper from the ultralyticsplus package

# load the model from the Hugging Face Hub (model id from this repository)
model = yolov5.load('hermanshid/yolo-layout-detector')

# set model parameters
model.overrides['conf'] = 0.25  # NMS confidence threshold
model.overrides['iou'] = 0.45  # NMS IoU threshold
model.overrides['max_det'] = 1000  # maximum number of detections per image

# set image (URL or local path)
image = 'https://huggingface.co/spaces/hermanshid/yolo-layout-detector-space/raw/main/test_images/example1.jpg'

# perform inference
results = model.predict(image)

# observe results
print(results[0].boxes)
render = render_result(model=model, image=image, result=results[0])
render.show()
```
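
Each entry in `results[0].boxes` carries box coordinates, a confidence score, and a class index. As a minimal sketch of turning them into readable detections (assuming ultralytics-style `Boxes` attributes `xyxy`, `conf`, and `cls`, and that class indices follow the label order listed above):

```python
# label order assumed to match the "Supported Labels" list above
labels = ["caption", "chart", "image", "image_caption",
          "table", "table_caption", "text", "title"]

boxes = results[0].boxes
for xyxy, conf, cls in zip(boxes.xyxy, boxes.conf, boxes.cls):
    x1, y1, x2, y2 = (int(v) for v in xyxy)  # top-left and bottom-right corners
    print(f"{labels[int(cls)]}: conf={float(conf):.2f} box=({x1}, {y1}, {x2}, {y2})")
```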