---
comments: true
---

# Rotated Object Detection Module Usage Tutorial

## I. Overview

Rotated object detection is a derivative of the object detection module, designed specifically for detecting rotated objects. Rotated bounding boxes are commonly used to detect rectangles with angle information, whose width and height are no longer parallel to the image coordinate axes. Compared with horizontal bounding boxes, rotated bounding boxes generally include less background information. Rotated object detection is often used in scenarios such as remote sensing.

## II. Supported Model List
| Model | Model Download Link | mAP(%) | GPU Inference Time (ms) | CPU Inference Time (ms) | Model Storage Size (M) | Introduction |
|---|---|---|---|---|---|---|
| PP-YOLOE-R-L | Inference Model/Training Model | 78.14 | 20.7039 | 157.942 | 211.0 M | PP-YOLOE-R is an efficient single-stage Anchor-free rotated box detection model. Based on PP-YOLOE, PP-YOLOE-R introduces a series of useful designs to improve detection accuracy with minimal parameters and computational cost. |
Note: The above accuracy metrics are mAP(0.5:0.95) on the DOTA validation set. All GPU inference times are based on an NVIDIA RTX 2080 Ti machine with FP16 precision; CPU inference speeds are based on an Intel(R) Xeon(R) Gold 5117 CPU @ 2.00GHz with 8 threads and FP32 precision.
> ❗ The models listed above are the rotated object detection models currently supported by PaddleX. In fact, PaddleDetection supports 10 rotated object detection models; for the detailed model list, please refer to PaddleDetection.

## III. Quick Integration

> ❗ Before quick integration, please install the PaddleX wheel package. For details, please refer to the [PaddleX Local Installation Tutorial](../../../installation/installation.en.md).

After installing the wheel package, a few lines of code are enough to run inference with the rotated object detection module. You can freely switch between the models under this module, and you can also integrate the model inference of the rotated object detection module into your project. Before running the following code, please download the [sample image](https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/rotated_object_detection_001.png) to your local machine.

```python
from paddlex import create_model

model = create_model("PP-YOLOE-R-L")
output = model.predict("rotated_object_detection_001.png", batch_size=1)
for res in output:
    res.print()
    res.save_to_img("./output/")
    res.save_to_json("./output/res.json")
```

After running, the result obtained is:

```bash
{'res': "{'input_path': 'rotated_object_detection_001.png', 'boxes': [{'cls_id': 4, 'label': 'small-vehicle', 'score': 0.7513620853424072, 'coordinate': [92.72234, 763.36676, 84.7699, 749.9725, 116.207375, 731.8547, 124.15982, 745.2489]}, {'cls_id': 4, 'label': 'small-vehicle', 'score': 0.7284387350082397, 'coordinate': [348.60703, 177.85127, 332.80432, 149.83975, 345.37347, 142.95677, 361.17618, 170.96828]}, {'cls_id': 11, 'label': 'roundabout', 'score': 0.7909174561500549, 'coordinate': [535.02216, 697.095, 201.49803, 608.4738, 292.2446, 276.9634, 625.76874, 365.5845]}]}"}
```

The meanings of the fields in the result are as follows:

- `input_path`: The path of the input image to be predicted.
- `boxes`: Information about each predicted object.
  - `cls_id`: Class ID.
  - `label`: Class name.
  - `score`: Prediction score.
  - `coordinate`: Coordinates of the predicted bounding box, in the format [x1, y1, x2, y2, x3, y3, x4, y4] (see the drawing sketch below).
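If you want to post-process the `coordinate` field yourself rather than relying on `save_to_img()`, the following minimal sketch shows how the 8-value corner list can be drawn as a closed polygon. The OpenCV dependency and the helper name `draw_rotated_box` are not part of PaddleX; the box values are taken from the sample output above.

```python
import os

import cv2
import numpy as np

def draw_rotated_box(img, coordinate, label, score, color=(0, 255, 0)):
    """Draw one rotated box given the 8-value corner list
    [x1, y1, x2, y2, x3, y3, x4, y4] from the prediction output."""
    pts = np.asarray(coordinate, dtype=np.float32).reshape(4, 2).astype(np.int32)
    cv2.polylines(img, [pts], isClosed=True, color=color, thickness=2)
    cv2.putText(img, f"{label} {score:.2f}", (int(pts[0][0]), int(pts[0][1])),
                cv2.FONT_HERSHEY_SIMPLEX, 0.5, color, 1)
    return img

# One box taken from the sample output shown above
img = cv2.imread("rotated_object_detection_001.png")
img = draw_rotated_box(
    img,
    [92.72234, 763.36676, 84.7699, 749.9725, 116.207375, 731.8547, 124.15982, 745.2489],
    label="small-vehicle",
    score=0.7514,
)
os.makedirs("./output", exist_ok=True)
cv2.imwrite("./output/manual_visualization.png", img)
```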
The visualization image is as follows:
Related methods and parameter explanations are as follows:
* `create_model` instantiates a rotated object detection model (using `PP-YOLOE-R-L` as an example). The specific explanations are as follows:
| Parameter | Parameter Description | Parameter Type | Options | Default Value |
|---|---|---|---|---|
| `model_name` | The name of the model | `str` | None | None |
| `model_dir` | The storage path of the model | `str` | None | None |
| `threshold` | The threshold for filtering out low-score objects | `float`/`None`/`dict` | None | None |
| `img_size` | The resolution used by the model for prediction | `int`/`tuple`/`None` | None | None |
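For example, based on the table above, the model can be instantiated with an explicit score threshold and inference resolution. The concrete values below are illustrative only:

```python
from paddlex import create_model

# Illustrative values only: keep boxes scoring above 0.5 and
# run prediction at a 1024 x 1024 input resolution.
model = create_model(
    model_name="PP-YOLOE-R-L",
    threshold=0.5,
    img_size=1024,
)
```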
* The `predict()` method of the rotated object detection model is called to perform inference. Its parameters are as follows:

| Parameter | Parameter Description | Parameter Type | Options | Default Value |
|---|---|---|---|---|
| `input` | Data to be predicted, supporting multiple input types | `Python Var`/`str`/`dict`/`list` | | None |
| `batch_size` | Batch size | `int` | Any integer | 1 |
| `threshold` | The threshold for filtering out low-score objects | `float`/`dict`/`None` | | None |
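Building on the `model` created above, the sketch below overrides the threshold at prediction time. The per-class dictionary form is an assumed interpretation of the `float`/`dict` parameter type; the class ids 4 (small-vehicle) and 11 (roundabout) come from the sample output above.

```python
# Batch of two inputs; the threshold is given as a dict keyed by class id
# (an assumed interpretation of the float/dict parameter type).
output = model.predict(
    ["rotated_object_detection_001.png", "rotated_object_detection_001.png"],
    batch_size=2,
    threshold={4: 0.6, 11: 0.7},
)
for res in output:
    res.print()
```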
* The prediction result of each sample can be processed through the following methods, which support printing, saving as an image, and saving as a `json` file:

| Method | Method Description | Parameter | Parameter Type | Parameter Description | Default Value |
|---|---|---|---|---|---|
| `print()` | Print the results to the terminal | `format_json` | `bool` | Whether to format the output content using `JSON` indentation | `True` |
| | | `indent` | `int` | Specify the indentation level to beautify the output `JSON` data and make it more readable. Only effective when `format_json` is `True` | 4 |
| | | `ensure_ascii` | `bool` | Control whether non-`ASCII` characters are escaped to `Unicode`. When set to `True`, all non-`ASCII` characters will be escaped; `False` retains the original characters. Only effective when `format_json` is `True` | `False` |
| `save_to_json()` | Save the results as a file in JSON format | `save_path` | `str` | The file path for saving. When it is a directory, the saved file name will be consistent with the input file name | None |
| | | `indent` | `int` | Specify the indentation level to beautify the output `JSON` data and make it more readable. Only effective when `format_json` is `True` | 4 |
| | | `ensure_ascii` | `bool` | Control whether non-`ASCII` characters are escaped to `Unicode`. When set to `True`, all non-`ASCII` characters will be escaped; `False` retains the original characters. Only effective when `format_json` is `True` | `False` |
| `save_to_img()` | Save the results as a file in image format | `save_path` | `str` | The file path for saving. When it is a directory, the saved file name will be consistent with the input file name | None |
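As an illustration of these methods, building on the `model` from the quick-start code, the parameter values below are illustrative only:

```python
output = model.predict("rotated_object_detection_001.png", batch_size=1)
for res in output:
    # Pretty-print with 2-space indentation, keeping non-ASCII characters as-is
    res.print(format_json=True, indent=2, ensure_ascii=False)
    # Save the visualization image and the structured result
    res.save_to_img("./output/")
    res.save_to_json("./output/res.json", indent=2, ensure_ascii=False)
```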
* In addition, the prediction result supports obtaining results through the following attributes:

| Attribute | Attribute Description |
|---|---|
| `json` | Get the prediction results in `json` format |
| `img` | Get the visualization image in `dict` format |
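A short sketch of reading these two attributes follows; the exact keys inside the dictionary returned by `img`, and whether its values are PIL-like image objects, are assumptions here and may differ between PaddleX versions.

```python
output = model.predict("rotated_object_detection_001.png", batch_size=1)
for res in output:
    json_result = res.json  # prediction result as a dict
    vis_images = res.img    # dict of visualization image(s)
    print(json_result)
    print(list(vis_images.keys()))
    # Assuming the values are PIL-like image objects, they could be saved manually:
    # for name, image in vis_images.items():
    #     image.save(f"./output/{name}.png")
```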
The specific content of the verification result file is:
```json
{
  "done_flag": true,
  "check_pass": true,
  "attributes": {
    "num_classes": 15,
    "train_samples": 1892,
    "train_sample_paths": [
      "check_dataset\/demo_img\/P2610__1.0__0___0.png",
      "check_dataset\/demo_img\/P1137__1.0__0___0.png",
      "check_dataset\/demo_img\/P1122__1.0__5888___1648.png",
      "check_dataset\/demo_img\/P0543__1.0__0___0.png",
      "check_dataset\/demo_img\/P0518__1.0__0___91.png",
      "check_dataset\/demo_img\/P0961__1.0__1648___87.png",
      "check_dataset\/demo_img\/P1732__1.0__0___824.png",
      "check_dataset\/demo_img\/P2766__1.0__4421___0.png",
      "check_dataset\/demo_img\/P2582__1.0__674___725.png",
      "check_dataset\/demo_img\/P1529__1.0__2976___1648.png"
    ],
    "val_samples": 473,
    "val_sample_paths": [
      "check_dataset\/demo_img\/P2342__1.0__890___0.png",
      "check_dataset\/demo_img\/P1386__1.0__2472___1648.png",
      "check_dataset\/demo_img\/P0961__1.0__824___87.png",
      "check_dataset\/demo_img\/P1651__1.0__824___824.png",
      "check_dataset\/demo_img\/P1529__1.0__824___2976.png",
      "check_dataset\/demo_img\/P0961__1.0__4944___87.png",
      "check_dataset\/demo_img\/P0725__1.0__634___0.png",
      "check_dataset\/demo_img\/P1679__1.0__1648___1648.png",
      "check_dataset\/demo_img\/P2726__1.0__824___1578.png",
      "check_dataset\/demo_img\/P0457__1.0__379___0.png"
    ]
  },
  "analysis": {
    "histogram": "check_dataset/histogram.png"
  },
  "dataset_path": "./dataset/DOTA-sampled200_crop1024_data",
  "show_type": "image",
  "dataset_type": "COCODetDataset"
}
```
In the above verification result, `check_pass` being `true` indicates that the dataset format meets the requirements. The explanations for the other indicators are as follows:

* `attributes.num_classes`: The number of classes in this dataset is 15;
* `attributes.train_samples`: The number of training samples in this dataset is 1892;
* `attributes.val_samples`: The number of validation samples in this dataset is 473;
* `attributes.train_sample_paths`: The list of relative paths to the visualized training sample images in this dataset;
* `attributes.val_sample_paths`: The list of relative paths to the visualized validation sample images in this dataset;

In addition, the dataset verification also analyzes the distribution of sample counts across all classes in the dataset and draws a distribution histogram (`histogram.png`):

**(1) Dataset Format Conversion**

Rotated object detection does not support dataset format conversion; only the standard DOTA COCO data format is supported.

**(2) Dataset Splitting**
The parameters for dataset splitting can be set by modifying the fields under `CheckDataset` in the configuration file. Example explanations of some parameters in the configuration file are as follows:

* `CheckDataset`:
  * `split`:
    * `enable`: Whether to re-split the dataset. Set to `True` to perform dataset splitting; the default is `False`;
    * `train_percent`: If re-splitting the dataset, you need to set the percentage of the training set, which is any integer between 0 and 100, and must sum to 100 with `val_percent`;
    * `val_percent`: If re-splitting the dataset, you need to set the percentage of the validation set, which is any integer between 0 and 100, and must sum to 100 with `train_percent`;
For example, if you want to re-split the dataset with 90% for the training set and 10% for the validation set, you need to modify the configuration file as follows:
```yaml
......
CheckDataset:
  ......
  split:
    enable: True
    train_percent: 90
    val_percent: 10
  ......
```
Then execute the command:
```bash
python main.py -c paddlex/configs/modules/rotated_object_detection/PP-YOLOE-R-L.yaml \
    -o Global.mode=check_dataset \
    -o Global.dataset_dir=./dataset/DOTA-sampled200_crop1024_data
```
After dataset splitting is executed, the original annotation files will be renamed to `xxx.bak`.

The above parameters can also be set by appending command-line arguments:
```bash
python main.py -c paddlex/configs/modules/rotated_object_detection/PP-YOLOE-R-L.yaml \
    -o Global.mode=check_dataset \
    -o Global.dataset_dir=./dataset/DOTA-sampled200_crop1024_data \
    -o CheckDataset.split.enable=True \
    -o CheckDataset.split.train_percent=90 \
    -o CheckDataset.split.val_percent=10
```
During model training, PaddleX automatically saves the model weight files, with the default directory being `output`. If you need to specify a save path, you can set it through the `-o Global.output` field in the configuration file.

After completing model training, all outputs are saved in the specified output directory (default is `./output/`), typically including:

* `train_result.json`: Training result record file, recording whether the training task was completed normally, as well as the output weight metrics, related file paths, etc.;
* `train.log`: Training log file, recording changes in model metrics and loss during training;
* `config.yaml`: Training configuration file, recording the hyperparameter configuration of this training session;
* `.pdparams`, `.pdema`, `.pdopt.pdstate`, `.pdiparams`, `.pdmodel`: Model weight-related files, including network parameters, optimizer, EMA, static graph network parameters, static graph network structure, etc.

When evaluating the model, you need to specify the path to the model weights file. Each configuration file has a default weight save path built in. If you need to change it, simply set it by appending a command-line argument, such as `-o Evaluate.weight_path=./output/best_model/best_model.pdparams`.
After completing the model evaluation, an `evaluate_result.json` file will be generated, which records the evaluation results: specifically, whether the evaluation task was completed successfully, and the model's evaluation metrics, including AP.