---
license: agpl-3.0
pipeline_tag: object-detection
tags:
  - ultralytics
  - tracking
  - instance-segmentation
  - image-classification
  - pose-estimation
  - obb
  - object-detection
  - yolo
  - yolov8
  - license_plate
  - Iran
  - veichle_lisence_plate
---

Ultralytics YOLOv8 is a cutting-edge, state-of-the-art (SOTA) model that builds upon the success of previous YOLO versions and introduces new features and improvements to further boost performance and flexibility. YOLOv8 is designed to be fast, accurate, and easy to use, making it an excellent choice for a wide range of object detection and tracking, instance segmentation, image classification and pose estimation tasks.

I fine-tuned this model on this dataset to detect Iranian vehicle license plates.

## Documentation

See below for a quickstart installation and usage example, and see the YOLOv8 Docs for full documentation on training, validation, prediction and deployment.

## Install

Pip install the ultralytics package including all requirements in a Python>=3.8 environment with PyTorch>=1.8.


```bash
pip install ultralytics
```

For alternative installation methods including Conda, Docker, and Git, please refer to the Quickstart Guide.
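For example, the Conda and Docker routes can look like the commands below (a minimal sketch; see the Quickstart Guide for the authoritative, up-to-date commands):

```bash
# Install from the conda-forge channel
conda install -c conda-forge ultralytics

# Or pull and run the official Docker image
docker pull ultralytics/ultralytics
docker run -it --ipc=host ultralytics/ultralytics
```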


## Usage

### CLI

YOLOv8 may be used directly in the Command Line Interface (CLI) with a `yolo` command:

```bash
yolo predict model=YOLOv8m_Iran_license_plate_detection.pt source='your_image.jpg'
```

`yolo` can be used for a variety of tasks and modes and accepts additional arguments, e.g. `imgsz=640`. See the YOLOv8 CLI Docs for examples.
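For instance, several optional arguments can be appended to the same command (the source image below is a placeholder; `imgsz`, `conf`, and `save` are standard predict arguments):

```bash
# Predict at 640 px, keep only boxes above 0.5 confidence, and save the annotated image
yolo predict model=YOLOv8m_Iran_license_plate_detection.pt source='your_image.jpg' imgsz=640 conf=0.5 save=True
```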

### Python

YOLOv8 may also be used directly in a Python environment, and accepts the same arguments as in the CLI example above:

```python
from ultralytics import YOLO

# Load a model
model = YOLO("local_model_path/YOLOv8m_Iran_license_plate_detection.pt")

# Train the model
train_results = model.train(
    data="Iran_license_plate.yaml",  # path to dataset YAML
    epochs=100,  # number of training epochs
    imgsz=640,  # training image size
    device="cpu",  # device to run on, i.e. device=0 or device=0,1,2,3 or device=cpu
)

# Evaluate model performance on the validation set
metrics = model.val()

# Perform object detection on an image
results = model("path/to/image.jpg")
results[0].show()

# Export the model to ONNX format
path = model.export(format="onnx")  # returns the path to the exported model
```
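The exported ONNX weights can be loaded back through the same `YOLO` interface for inference. A minimal sketch, assuming the export step above wrote the `.onnx` file next to the original weights:

```python
from ultralytics import YOLO

# Load the exported ONNX model (path assumed from the export step above)
onnx_model = YOLO("local_model_path/YOLOv8m_Iran_license_plate_detection.onnx")

# Run inference just as with the PyTorch weights
results = onnx_model("path/to/image.jpg")
results[0].show()
```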

## Inference

You can run the model with the code below to see its detections. How you plot or save the detected objects is up to you; here is one example:

```python
from ultralytics import YOLO
import matplotlib.pyplot as plt
import cv2

# Load the YOLO model
model = YOLO("path/to/local/model.pt")

# Define the input image file path
file_path = "path/to/image"

# Get the prediction results
results = model([file_path])

# Read the input image
img = cv2.imread(file_path)

# Iterate over the results to extract the bounding box and display both input and cropped output
for result in results:
    if len(result.boxes) == 0:
        continue  # skip images with no detections

    maxa = result.boxes.conf.argmax()  # index of the highest-confidence box
    x, y, w, h = result.boxes.xywh[maxa].tolist()  # center coordinates and box size
    print(f"Bounding box: x={x}, y={y}, w={w}, h={h}")

    # Crop the detected object from the image
    # (indices clamped to 0 so a box at the edge cannot produce negative indices)
    crop_img = img[max(int(y - h / 2), 0):int(y + h / 2), max(int(x - w / 2), 0):int(x + w / 2)]

    # Convert the images from BGR to RGB for display with matplotlib
    img_rgb = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
    crop_img_rgb = cv2.cvtColor(crop_img, cv2.COLOR_BGR2RGB)

    # Plot the input image and cropped image side by side
    plt.figure(figsize=(10, 5))

    # Display the input image
    plt.subplot(1, 2, 1)
    plt.imshow(img_rgb)
    plt.title("Input Image")
    plt.axis("off")

    # Display the cropped image (output)
    plt.subplot(1, 2, 2)
    plt.imshow(crop_img_rgb)
    plt.title("Cropped Output")
    plt.axis("off")

    plt.show()
```

Expected output:

```
Bounding box: x=246.37399291992188, y=254.00021362304688, w=146.7321014404297, h=38.26557922363281
```

followed by a side-by-side plot of the input image and the cropped license plate.
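If you prefer not to hand-roll the matplotlib plotting, the `Results` object can render annotated images and save crops itself. A minimal sketch (file and directory names are placeholders; verify `save_crop` against your installed ultralytics version):

```python
from ultralytics import YOLO
import cv2

model = YOLO("path/to/local/model.pt")
results = model("path/to/image")

# plot() returns the image with boxes drawn as a BGR numpy array
annotated = results[0].plot()
cv2.imwrite("annotated.jpg", annotated)

# Save a crop of each detected plate into the given directory
results[0].save_crop("crops", file_name="plate")
```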

See YOLOv8 Python Docs for more examples.