---
comments: true
---

# Object Detection Module Development Tutorial

## I. Overview

The object detection module is a crucial component in computer vision systems, responsible for locating and marking regions containing specific objects in images or videos. The performance of this module directly impacts the accuracy and efficiency of the entire computer vision system. The object detection module typically outputs bounding boxes for the target regions, which are then passed as input to the object recognition module for further processing.

## II. List of Supported Models

| Model | Model Download Link | mAP(%) | GPU Inference Time (ms) | CPU Inference Time (ms) | Model Storage Size (M) | Description |
|---|---|---|---|---|---|---|
| PicoDet-L | Inference Model/Trained Model | 42.6 | 16.6715 | 169.904 | 20.9 M | PP-PicoDet is a lightweight object detection algorithm for full-size, wide-angle targets, considering the computational capacity of mobile devices. Compared to traditional object detection algorithms, PP-PicoDet has a smaller model size and lower computational complexity, achieving higher speed and lower latency while maintaining detection accuracy. |
| PicoDet-S | Inference Model/Trained Model | 29.1 | 14.097 | 37.6563 | 4.4 M | |
| PP-YOLOE_plus-L | Inference Model/Trained Model | 52.9 | 33.5644 | 814.825 | 185.3 M | PP-YOLOE_plus is an upgraded version of the high-precision cloud-edge integrated model PP-YOLOE, developed by Baidu's PaddlePaddle vision team. By using the large-scale Objects365 dataset and optimizing preprocessing, it significantly enhances the model's end-to-end inference speed. |
| PP-YOLOE_plus-S | Inference Model/Trained Model | 43.7 | 16.8884 | 223.059 | 28.3 M | |
| RT-DETR-H | Inference Model/Trained Model | 56.3 | 114.814 | 3933.39 | 435.8 M | RT-DETR is the first real-time end-to-end object detector. The model features an efficient hybrid encoder to meet both model performance and throughput requirements, efficiently handling multi-scale features, and proposes an accelerated and optimized query selection mechanism to optimize the dynamics of decoder queries. RT-DETR supports flexible end-to-end inference speeds by using different decoders. |
| RT-DETR-L | Inference Model/Trained Model | 53.0 | 34.5252 | 1454.27 | 113.7 M | |

The complete list of supported models is as follows:

| Model | Model Download Link | mAP(%) | GPU Inference Time (ms) | CPU Inference Time (ms) | Model Storage Size (M) | Description |
|---|---|---|---|---|---|---|
| Cascade-FasterRCNN-ResNet50-FPN | Inference Model/Trained Model | 41.1 | - | - | 245.4 M | Cascade-FasterRCNN is an improved version of the Faster R-CNN object detection model. By cascading multiple detectors and refining detection results with increasing IoU thresholds, it addresses the mismatch between the training and prediction stages, enhancing the accuracy of object detection. |
| Cascade-FasterRCNN-ResNet50-vd-SSLDv2-FPN | Inference Model/Trained Model | 45.0 | - | - | 246.2 M | |
| CenterNet-DLA-34 | Inference Model/Trained Model | 37.6 | - | - | 75.4 M | CenterNet is an anchor-free object detection model that treats the keypoints of the object to be detected as a single point (the center point of its bounding box) and performs regression through these keypoints. |
| CenterNet-ResNet50 | Inference Model/Trained Model | 38.9 | - | - | 319.7 M | |
| DETR-R50 | Inference Model/Trained Model | 42.3 | 59.2132 | 5334.52 | 159.3 M | DETR is a transformer-based object detection model proposed by Facebook. It achieves end-to-end object detection without the need for predefined anchor boxes or NMS post-processing strategies. |
| FasterRCNN-ResNet34-FPN | Inference Model/Trained Model | 37.8 | - | - | 137.5 M | Faster R-CNN is a typical two-stage object detection model that first generates region proposals and then performs classification and regression on these proposals. Compared to its predecessors R-CNN and Fast R-CNN, Faster R-CNN's main improvement lies in the region proposal aspect, using a Region Proposal Network (RPN) to provide region proposals instead of traditional selective search. RPN is a Convolutional Neural Network (CNN) that shares convolutional features with the detection network, reducing the computational overhead of region proposals. |
| FasterRCNN-ResNet50-FPN | Inference Model/Trained Model | 38.4 | - | - | 148.1 M | |
| FasterRCNN-ResNet50-vd-FPN | Inference Model/Trained Model | 39.5 | - | - | 148.1 M | |
| FasterRCNN-ResNet50-vd-SSLDv2-FPN | Inference Model/Trained Model | 41.4 | - | - | 148.1 M | |
| FasterRCNN-ResNet50 | Inference Model/Trained Model | 36.7 | - | - | 120.2 M | |
| FasterRCNN-ResNet101-FPN | Inference Model/Trained Model | 41.4 | - | - | 216.3 M | |
| FasterRCNN-ResNet101 | Inference Model/Trained Model | 39.0 | - | - | 188.1 M | |
| FasterRCNN-ResNeXt101-vd-FPN | Inference Model/Trained Model | 43.4 | - | - | 360.6 M | |
| FasterRCNN-Swin-Tiny-FPN | Inference Model/Trained Model | 42.6 | - | - | 159.8 M | |
| FCOS-ResNet50 | Inference Model/Trained Model | 39.6 | 103.367 | 3424.91 | 124.2 M | FCOS is an anchor-free object detection model that performs dense predictions. It uses the backbone of RetinaNet and directly regresses the width and height of the target object on the feature map, predicting the object's category and centerness (the degree of offset of pixels on the feature map from the object's center), which is eventually used as a weight to adjust the object score. |
| PicoDet-L | Inference Model/Trained Model | 42.6 | 16.6715 | 169.904 | 20.9 M | PP-PicoDet is a lightweight object detection algorithm designed for full-size and wide-aspect-ratio targets, with a focus on mobile device computation. Compared to traditional object detection algorithms, PP-PicoDet boasts smaller model sizes and lower computational complexity, achieving higher speeds and lower latency while maintaining detection accuracy. |
| PicoDet-M | Inference Model/Trained Model | 37.5 | 16.2311 | 71.7257 | 16.8 M | |
| PicoDet-S | Inference Model/Trained Model | 29.1 | 14.097 | 37.6563 | 4.4 M | |
| PicoDet-XS | Inference Model/Trained Model | 26.2 | 13.8102 | 48.3139 | 5.7 M | |
| PP-YOLOE_plus-L | Inference Model/Trained Model | 52.9 | 33.5644 | 814.825 | 185.3 M | PP-YOLOE_plus is an iteratively optimized and upgraded version of PP-YOLOE, a high-precision cloud-edge integrated model developed by Baidu PaddlePaddle's Vision Team. By leveraging the large-scale Objects365 dataset and optimizing preprocessing, it significantly enhances the end-to-end inference speed of the model. |
| PP-YOLOE_plus-M | Inference Model/Trained Model | 49.8 | 19.843 | 449.261 | 82.3 M | |
| PP-YOLOE_plus-S | Inference Model/Trained Model | 43.7 | 16.8884 | 223.059 | 28.3 M | |
| PP-YOLOE_plus-X | Inference Model/Trained Model | 54.7 | 57.8995 | 1439.93 | 349.4 M | |
| RT-DETR-H | Inference Model/Trained Model | 56.3 | 114.814 | 3933.39 | 435.8 M | RT-DETR is the first real-time end-to-end object detector. It features an efficient hybrid encoder that balances model performance and throughput, efficiently processes multi-scale features, and introduces an accelerated and optimized query selection mechanism to dynamize decoder queries. RT-DETR supports flexible end-to-end inference speeds through the use of different decoders. |
| RT-DETR-L | Inference Model/Trained Model | 53.0 | 34.5252 | 1454.27 | 113.7 M | |
| RT-DETR-R18 | Inference Model/Trained Model | 46.5 | 19.89 | 784.824 | 70.7 M | |
| RT-DETR-R50 | Inference Model/Trained Model | 53.1 | 41.9327 | 1625.95 | 149.1 M | |
| RT-DETR-X | Inference Model/Trained Model | 54.8 | 61.8042 | 2246.64 | 232.9 M | |
| YOLOv3-DarkNet53 | Inference Model/Trained Model | 39.1 | 40.1055 | 883.041 | 219.7 M | YOLOv3 is a real-time end-to-end object detector that utilizes a unique single Convolutional Neural Network (CNN) to frame the object detection problem as a regression task, enabling real-time detection. The model employs multi-scale detection to enhance performance across different object sizes. |
| YOLOv3-MobileNetV3 | Inference Model/Trained Model | 31.4 | 18.6692 | 267.214 | 83.8 M | |
| YOLOv3-ResNet50_vd_DCN | Inference Model/Trained Model | 40.6 | 31.6276 | 856.047 | 163.0 M | |
| YOLOX-L | Inference Model/Trained Model | 50.1 | 185.691 | 1250.58 | 192.5 M | Building upon YOLOv3's framework, YOLOX significantly boosts detection performance in complex scenarios by incorporating Decoupled Head, Data Augmentation, Anchor Free, and SimOTA components. |
| YOLOX-M | Inference Model/Trained Model | 46.9 | 123.324 | 688.071 | 90.0 M | |
| YOLOX-N | Inference Model/Trained Model | 26.1 | 79.1665 | 155.59 | 3.4 M | |
| YOLOX-S | Inference Model/Trained Model | 40.4 | 184.828 | 474.446 | 32.0 M | |
| YOLOX-T | Inference Model/Trained Model | 32.9 | 102.748 | 212.52 | 18.1 M | |
| YOLOX-X | Inference Model/Trained Model | 51.8 | 227.361 | 2067.84 | 351.5 M | |

Note: The precision metrics above are mAP(0.5:0.95) on the COCO2017 validation set. All GPU inference times are measured on an NVIDIA Tesla T4 machine with FP32 precision; CPU inference speeds are measured on an Intel(R) Xeon(R) Gold 5117 CPU @ 2.00GHz with 8 threads and FP32 precision.

Related methods, parameters, and explanations are as follows:
* `create_model` instantiates an object detection model (here, `PicoDet-S` is used as an example), with the following parameters:

| Parameter | Parameter Description | Parameter Type | Options | Default Value |
|---|---|---|---|---|
| `model_name` | Name of the model | `str` | None | None |
| `model_dir` | Path to store the model | `str` | None | None |
| `img_size` | Size of the input image; if not specified, the default configuration of the PaddleX official model will be used | `int`/`list` | None | None |
| `threshold` | Threshold for filtering low-confidence prediction results; if not specified, the default configuration of the PaddleX official model will be used | `float` | None | None |

* The `predict()` method of the object detection model is called for inference prediction, with the following parameters:

| Parameter | Parameter Description | Parameter Type | Options | Default Value |
|---|---|---|---|---|
| `input` | Data to be predicted, supporting multiple input types | `Python Var`/`str`/`dict`/`list` | None | None |
| `batch_size` | Batch size | `int` | Any integer | 1 |
| `threshold` | Threshold for filtering low-confidence prediction results; if not specified, the `threshold` parameter specified in `create_model` will be used, and if `create_model` also does not specify it, the default configuration of the PaddleX official model will be used | `float` | None | None |

* In addition, the prediction results can be processed with the following methods:

| Method | Method Description | Parameter | Parameter Type | Parameter Description | Default Value |
|---|---|---|---|---|---|
| `print()` | Print the results to the terminal | `format_json` | `bool` | Whether to format the output content using `JSON` indentation | `True` |
| | | `indent` | `int` | Specify the indentation level to beautify the output `JSON` data, making it more readable; only effective when `format_json` is `True` | 4 |
| | | `ensure_ascii` | `bool` | Control whether to escape non-`ASCII` characters to `Unicode`. If set to `True`, all non-`ASCII` characters will be escaped; `False` retains the original characters; only effective when `format_json` is `True` | `False` |
| `save_to_json()` | Save the results as a JSON file | `save_path` | `str` | The path to save the file. If it is a directory, the saved file name will be consistent with the input file name | None |
| | | `indent` | `int` | Specify the indentation level to beautify the output `JSON` data, making it more readable; only effective when `format_json` is `True` | 4 |
| | | `ensure_ascii` | `bool` | Control whether to escape non-`ASCII` characters to `Unicode`. If set to `True`, all non-`ASCII` characters will be escaped; `False` retains the original characters; only effective when `format_json` is `True` | `False` |
| `save_to_img()` | Save the results as an image file | `save_path` | `str` | The path to save the file. If it is a directory, the saved file name will be consistent with the input file name | None |

* The prediction results also support the following attributes:

| Attribute | Attribute Description |
|---|---|
| `json` | Get the prediction result in `json` format |
| `img` | Get the visualization image in `dict` format |

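Putting the above together, here is a minimal usage sketch built only from the parameters, methods, and attributes documented above; the input image path is a placeholder for any local test image:

```python
from paddlex import create_model

# Instantiate the model (PicoDet-S, as in the tables above).
# img_size and threshold fall back to the official PaddleX defaults when omitted.
model = create_model(model_name="PicoDet-S")

# "demo.png" is a placeholder -- substitute any local image path.
# threshold here overrides the create_model/official default.
output = model.predict("demo.png", batch_size=1, threshold=0.5)

# Each result supports the print()/save_to_json()/save_to_img() methods
# and the json/img attributes described above.
for res in output:
    res.print(format_json=True, indent=4, ensure_ascii=False)
    res.save_to_img(save_path="./output/")
    res.save_to_json(save_path="./output/res.json")
```
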
The specific content of the validation result file is:

```json
{
"done_flag": true,
"check_pass": true,
"attributes": {
"num_classes": 4,
"train_samples": 701,
"train_sample_paths": [
"check_dataset/demo_img/road839.png",
"check_dataset/demo_img/road363.png",
"check_dataset/demo_img/road148.png",
"check_dataset/demo_img/road237.png",
"check_dataset/demo_img/road733.png",
"check_dataset/demo_img/road861.png",
"check_dataset/demo_img/road762.png",
"check_dataset/demo_img/road515.png",
"check_dataset/demo_img/road754.png",
"check_dataset/demo_img/road173.png"
],
"val_samples": 176,
"val_sample_paths": [
"check_dataset/demo_img/road218.png",
"check_dataset/demo_img/road681.png",
"check_dataset/demo_img/road138.png",
"check_dataset/demo_img/road544.png",
"check_dataset/demo_img/road596.png",
"check_dataset/demo_img/road857.png",
"check_dataset/demo_img/road203.png",
"check_dataset/demo_img/road589.png",
"check_dataset/demo_img/road655.png",
"check_dataset/demo_img/road245.png"
]
},
"analysis": {
"histogram": "check_dataset/histogram.png"
},
"dataset_path": "./dataset/det_coco_examples",
"show_type": "image",
"dataset_type": "COCODetDataset"
}
```
In the above validation results, `check_pass` being `true` indicates that the dataset format meets the requirements. Explanations for the other indicators are as follows:

* `attributes.num_classes`: the number of classes in this dataset is 4;
* `attributes.train_samples`: the number of training samples in this dataset is 701;
* `attributes.val_samples`: the number of validation samples in this dataset is 176;
* `attributes.train_sample_paths`: a list of relative paths to the visualization images of training samples in this dataset;
* `attributes.val_sample_paths`: a list of relative paths to the visualization images of validation samples in this dataset.

Additionally, the dataset validation analyzes the distribution of sample numbers across all classes in the dataset and generates a histogram (`histogram.png`) for visualization:


(1) Dataset Format Conversion
Object detection supports converting datasets in VOC and LabelMe formats to COCO format.
Parameters related to dataset validation can be set by modifying the fields under `CheckDataset` in the configuration file. Examples of some parameters in the configuration file are as follows:

* `CheckDataset`:
  * `convert`:
    * `enable`: Whether to perform dataset format conversion. Object detection supports converting `VOC` and `LabelMe` format datasets to `COCO` format. Default is `False`;
    * `src_dataset_type`: If dataset format conversion is performed, the source dataset format needs to be set. Default is `null`, with optional values `VOC`, `LabelMe`, `VOCWithUnlabeled`, `LabelMeWithUnlabeled`;

For example, if you want to convert a LabelMe format dataset to COCO format, taking the following LabelMe format dataset as an example, first download and extract it:

```bash
cd /path/to/paddlex
wget https://paddle-model-ecology.bj.bcebos.com/paddlex/data/det_labelme_examples.tar -P ./dataset
tar -xf ./dataset/det_labelme_examples.tar -C ./dataset/
```

Then modify the configuration file as follows:

```yaml
......
CheckDataset:
  ......
  convert:
    enable: True
    src_dataset_type: LabelMe
  ......
```

Then execute the command:

```bash
python main.py -c paddlex/configs/modules/object_detection/PicoDet-S.yaml \
    -o Global.mode=check_dataset \
    -o Global.dataset_dir=./dataset/det_labelme_examples
```

Of course, the above parameters also support being set by appending command-line arguments. Taking a LabelMe format dataset as an example:

```bash
python main.py -c paddlex/configs/modules/object_detection/PicoDet-S.yaml \
    -o Global.mode=check_dataset \
    -o Global.dataset_dir=./dataset/det_labelme_examples \
    -o CheckDataset.convert.enable=True \
    -o CheckDataset.convert.src_dataset_type=LabelMe
```

**(2) Dataset Splitting**

Parameters for dataset splitting can be set by modifying the fields under `CheckDataset` in the configuration file. Examples of some parameters in the configuration file are as follows:

* `CheckDataset`:
  * `split`:
    * `enable`: Whether to re-split the dataset. When set to `True`, dataset splitting is performed. Default is `False`;
    * `train_percent`: If the dataset is re-split, the percentage of the training set needs to be set; the value is any integer between 0 and 100, and it must sum to 100 with `val_percent`;
    * `val_percent`: If the dataset is re-split, the percentage of the validation set needs to be set; the value is any integer between 0 and 100, and it must sum to 100 with `train_percent`;

For example, if you want to re-split the dataset with a 90% training set and a 10% validation set, modify the configuration file as follows:

```yaml
......
CheckDataset:
  ......
  split:
    enable: True
    train_percent: 90
    val_percent: 10
  ......
```

Then execute the command:

```bash
python main.py -c paddlex/configs/modules/object_detection/PicoDet-S.yaml \
    -o Global.mode=check_dataset \
    -o Global.dataset_dir=./dataset/det_coco_examples
```

After dataset splitting is executed, the original annotation files will be renamed to `xxx.bak` in the original path.

The above parameters also support being set by appending command-line arguments:

```bash
python main.py -c paddlex/configs/modules/object_detection/PicoDet-S.yaml \
    -o Global.mode=check_dataset \
    -o Global.dataset_dir=./dataset/det_coco_examples \
    -o CheckDataset.split.enable=True \
    -o CheckDataset.split.train_percent=90 \
    -o CheckDataset.split.val_percent=10
```

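Once the dataset passes validation, model training is launched through the same `main.py` entry point. A minimal sketch, assuming `Global.mode=train` follows the same CLI pattern as the `check_dataset` commands above and reuses the demo dataset:

```bash
# Assumed training invocation, mirroring the check_dataset commands above.
python main.py -c paddlex/configs/modules/object_detection/PicoDet-S.yaml \
    -o Global.mode=train \
    -o Global.dataset_dir=./dataset/det_coco_examples
```
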
By default, training results are saved in the `output` directory. If you need to specify a save path, you can set it through the `-o Global.output` field in the configuration file. After completing model training, all outputs are saved in the specified output directory (default is `./output/`), typically including:

* `train_result.json`: Training result record file, recording whether the training task completed normally, as well as the output weight metrics, related file paths, etc.;
* `train.log`: Training log file, recording changes in model metrics and loss during training;
* `config.yaml`: Training configuration file, recording the hyperparameter configuration for this training session;
* `.pdparams`, `.pdema`, `.pdopt`, `.pdstate`, `.pdiparams`, `.pdmodel`: Model weight-related files, including network parameters, optimizer state, EMA weights, static graph network parameters, static graph network structure, etc.

When evaluating the model, you need to specify the model weights file path. Each configuration file has a built-in default weight save path; if you need to change it, simply set it by appending a command-line parameter, such as `-o Evaluate.weight_path=./output/best_model/best_model.pdparams`.

After completing the model evaluation, an `evaluate_result.json` file will be generated, recording whether the evaluation task completed successfully and the model's evaluation metrics, including AP.
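
Evaluation is launched the same way; a minimal sketch, assuming `Global.mode=evaluate` mirrors the commands above:

```bash
# Assumed evaluation invocation, mirroring the commands above.
python main.py -c paddlex/configs/modules/object_detection/PicoDet-S.yaml \
    -o Global.mode=evaluate \
    -o Global.dataset_dir=./dataset/det_coco_examples
# To evaluate other weights, append e.g.:
#   -o Evaluate.weight_path=./output/best_model/best_model.pdparams
```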