Detecting Odd & Even Car Plate Numbers Using YOLOv11 on G5.xlarge

This personal project leverages YOLOv11, an advanced object detection model, to automatically identify and classify vehicle license plates as odd or even based on the last digit. Using a RoboFlow dataset of annotated car plates, the model was trained on an AWS G5.xlarge GPU instance for optimized performance.

Published Apr 14, 2025

Introduction

Automatic number plate recognition (ANPR) systems have become essential in traffic management, law enforcement, and smart city applications. One interesting use case is detecting odd and even-numbered plates to regulate traffic flow during peak hours or pollution control measures. In this proof of concept (PoC), we explore how YOLOv11—a state-of-the-art object detection model—can be leveraged to classify car plates as odd or even. I used a RoboFlow dataset (Deteksi Plat Nomor Mobil) for training and deployed the model on an AWS G5.xlarge instance for efficient inference.

Background of ANPR and Odd-Even Classification

Number plate detection involves localizing and recognizing characters on vehicle plates. Extending this to odd-even classification requires an additional step: analyzing the last digit of the recognized number. Governments in cities like Jakarta have implemented odd-even schemes to reduce congestion by allowing only odd or even-numbered vehicles on specific days. Traditional ANPR systems rely on OCR (Optical Character Recognition), but deep learning-based approaches like YOLOv11 provide faster and more accurate detection.
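
A minimal sketch of that parity step is shown below; it assumes the plate string has already been recognized, and the function name and example plates are illustrative only:

```python
import re

def classify_parity(plate_text: str) -> str:
    """Classify a recognized plate string as odd (ganjil) or even (genap)
    based on the last digit of its numeric portion.

    Hypothetical helper: it assumes the plate text has already been read,
    e.g. "B 1234 XYZ" in the Indonesian format.
    """
    digits = re.findall(r"\d", plate_text)
    if not digits:
        raise ValueError(f"No digits found in plate text: {plate_text!r}")
    return "even" if int(digits[-1]) % 2 == 0 else "odd"

print(classify_parity("B 1234 XYZ"))  # -> even
print(classify_parity("D 4321 AB"))   # -> odd
```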

Understanding YOLOv11 Architecture

YOLOv11 is an evolution of the YOLO (You Only Look Once) family, designed for real-time object detection. Compared with its predecessors, YOLOv11 introduces enhanced feature extraction with a deeper backbone, a refined detection head, and optimized loss functions. The model processes each image in a single forward pass, dividing it into a grid and predicting bounding boxes, class probabilities, and confidence scores simultaneously. This makes YOLOv11 highly efficient for tasks like plate detection, where both speed and accuracy are crucial.
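
To make the single-pass detection flow concrete, here is a short example using the Ultralytics Python package; the checkpoint name and image path are placeholders rather than files from this project:

```python
from ultralytics import YOLO

# Load a pretrained YOLO11 checkpoint (the weight file name is a placeholder;
# any Ultralytics-compatible .pt checkpoint loads the same way).
model = YOLO("yolo11n.pt")

# One forward pass returns bounding boxes, class indices, and confidence
# scores for every detected object in the image.
results = model("car_with_plate.jpg")

for result in results:
    for box in result.boxes:
        print(box.xyxy, box.cls, box.conf)
```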

Dataset Preparation and Annotation

The RoboFlow dataset (Deteksi Plat Nomor Mobil) contains annotated car plate images suitable for training object detection models. The annotations are provided in YOLO format, giving normalized bounding box coordinates around each plate, and the dataset defines two classes: odd (ganjil) and even (genap) plates.
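
As a rough illustration of what those labels and the dataset configuration look like, the sketch below parses a YOLO-format annotation line and writes a minimal two-class config. The paths and class order are assumptions; in practice the data.yaml shipped with the actual RoboFlow export should be used.

```python
import yaml  # PyYAML

# Each YOLO-format label file has one line per object:
#   <class_id> <x_center> <y_center> <width> <height>
# with coordinates normalized to [0, 1].
def parse_yolo_label(line: str):
    class_id, x, y, w, h = line.split()
    return int(class_id), float(x), float(y), float(w), float(h)

print(parse_yolo_label("1 0.512 0.634 0.210 0.085"))

# Minimal dataset config for a two-class (odd/even) export.
# Paths and class order below are placeholders, not the project's actual values.
data_config = {
    "train": "datasets/plat-nomor/train/images",
    "val": "datasets/plat-nomor/valid/images",
    "nc": 2,
    "names": ["ganjil", "genap"],  # odd, even
}

with open("data.yaml", "w") as f:
    yaml.safe_dump(data_config, f)
```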

Training YOLOv11 on G5.xlarge

I utilized an AWS G5.xlarge instance equipped with an NVIDIA A10G GPU to accelerate training. The YOLOv11 model was fine-tuned on the annotated dataset with a batch size of 16 and an initial learning rate of 0.001. Training metrics, including mean Average Precision (mAP) and the loss curves, were monitored throughout. After 50 epochs, the model achieved an mAP of 0.88, indicating strong detection performance.
Training Result
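
For reference, a training run with those hyperparameters might look like the sketch below, assuming the Ultralytics training API; the checkpoint name and data.yaml path are placeholders rather than the exact files used here:

```python
from ultralytics import YOLO

# Fine-tune a pretrained checkpoint on the two-class plate dataset.
# The hyperparameters mirror the ones reported above; "data.yaml" is a
# placeholder for the RoboFlow export's dataset config.
model = YOLO("yolo11n.pt")
model.train(
    data="data.yaml",
    epochs=50,    # trained for 50 epochs
    batch=16,     # batch size 16
    lr0=0.001,    # initial learning rate
    imgsz=640,    # input resolution (assumed default)
    device=0,     # the single NVIDIA A10G GPU on the G5.xlarge instance
)

# Evaluate on the validation split to reproduce the mAP figure.
metrics = model.val()
print(metrics.box.map50)  # mAP@0.5
```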

Implementation Using PyTorch

To further validate the approach, I implemented the odd-even plate detection system using PyTorch, leveraging the flexibility of deep learning frameworks for customization. The complete code and experimentation details are available in my GitHub repository: Detecting Odd and Even Plate Number.
This implementation included a YOLOv11-based detector for localizing license plates. The model was trained on the same RoboFlow dataset.
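
A possible inference loop for this setup is sketched below; the weights path, class-index mapping, and confidence threshold are assumptions and would need to match the actual training run:

```python
from ultralytics import YOLO

# Assumed class ordering and weights path; adjust both to the real training run.
CLASS_NAMES = {0: "ganjil (odd)", 1: "genap (even)"}
model = YOLO("runs/detect/train/weights/best.pt")

results = model("test_car.jpg", conf=0.5)
for result in results:
    for box in result.boxes:
        class_id = int(box.cls.item())
        confidence = float(box.conf.item())
        print(f"Plate detected: {CLASS_NAMES[class_id]} ({confidence:.2f})")
```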

Results and Performance Analysis

The PyTorch implementation achieved a detection accuracy of 78% on the test set.
Confusion Matrix
The confusion matrix summarizes the odd-even classification results: even (genap) plates were classified with 75% accuracy, while odd (ganjil) plates reached 78%.
Result Even Plate
Result Odd Plate
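
To show how those per-class figures are read off a confusion matrix, here is a small sketch with made-up counts; only the reported 75%/78% accuracies come from the actual experiment.

```python
import numpy as np

# Illustrative 2x2 confusion matrix (rows = true class, columns = predicted).
# The counts are invented for demonstration purposes only.
# Class order: [ganjil/odd, genap/even]
cm = np.array([
    [78, 22],   # true odd:  78 correct, 22 misclassified
    [25, 75],   # true even: 25 misclassified, 75 correct
])

# Per-class accuracy = correct predictions / all samples of that true class.
per_class_accuracy = cm.diagonal() / cm.sum(axis=1)
print(f"Odd (ganjil) accuracy:  {per_class_accuracy[0]:.0%}")
print(f"Even (genap) accuracy: {per_class_accuracy[1]:.0%}")
```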

Conclusion and Future Work

This PoC demonstrates the feasibility of using YOLOv11 for odd-even plate detection, with potential applications in smart traffic systems. Future improvements could include end-to-end digit recognition without a separate OCR stage and deployment of the model on edge devices for real-time roadside processing. Integrating additional sensors (e.g., infrared cameras) could further improve robustness under varying lighting conditions.

