The 3D multimodal fusion detection module is a key component of computer vision and autonomous driving systems. It is responsible for locating regions that contain specific targets in images or videos and outputting their 3D coordinates and detection box information. The performance of this module directly affects the accuracy and efficiency of the entire vision or autonomous driving perception system. The module typically outputs 3D bounding boxes of target regions, which are then passed as inputs to the target recognition module for further processing.
| Model | Model Download Link | mAP(%) | NDS | Introduction |
|---|---|---|---|---|
| BEVFusion | Inference Model/Training Model | 53.9 | 60.9 | BEVFusion is a multimodal fusion model in the Bird's Eye View (BEV) perspective. It uses two branches to process data from different modalities, obtaining LiDAR and camera features in the BEV perspective. The camera branch uses the LSS (Lift, Splat, Shoot) bottom-up approach to explicitly generate image BEV features, while the LiDAR branch uses a classic point cloud detection network. Finally, the BEV features of the two modalities are aligned and fused for use in detection heads or segmentation heads. |
| Parameter | Parameter Description | Parameter Type | Optional | Default Value |
|---|---|---|---|---|
| model_name | Name of the model | str | No | BEVFusion |
| model_dir | Path where the model is stored | str | No | None |
The model_name must be specified. After specifying model_name, the default model parameters built into PaddleX will be used. If model_dir is specified, the user-defined model will be used.
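For example, a minimal instantiation sketch, assuming PaddleX's create_model entry point (see the PaddleX Single-Model Python Script Usage Instructions):

```python
from paddlex import create_model

# Instantiate with the built-in BEVFusion weights;
# pass model_dir="/path/to/weights" instead to load a user-defined model
model = create_model(model_name="BEVFusion")
```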
Inference is performed by calling the predict() method of the 3D detection model. The predict() method accepts the parameters input and batch_size, explained as follows:
| Parameter | Parameter Description | Parameter Type | Optional | Default Value |
|---|---|---|---|---|
| input | Data to be predicted, supporting multiple input types | str/list | | None |
| batch_size | Batch size | int | Any integer | 1 |
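A minimal prediction sketch, assuming the model instance created above (the input file name is illustrative):

```python
# Predict on a local sample archive; "nuscenes_demo_infer.tar" is a placeholder path
output = model.predict(input="nuscenes_demo_infer.tar", batch_size=1)
```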
The prediction results can be printed to the terminal or saved as a json file via the following methods:

| Method | Method Description | Parameter | Parameter Type | Parameter Description | Default Value |
|---|---|---|---|---|---|
| print() | Print the result to the terminal | format_json | bool | Whether to format the output content using JSON indentation | True |
| | | indent | int | Specify the indentation level to beautify the output JSON data, making it more readable; only effective when format_json is True | 4 |
| | | ensure_ascii | bool | Control whether non-ASCII characters are escaped to Unicode. When set to True, all non-ASCII characters will be escaped; False retains the original characters. Only effective when format_json is True | False |
| save_to_json() | Save the result as a JSON-formatted file | save_path | str | The path to save the file. When it is a directory, the saved file name will be consistent with the input file name | None |
| | | indent | int | Specify the indentation level to beautify the output JSON data, making it more readable; only effective when format_json is True | 4 |
| | | ensure_ascii | bool | Control whether non-ASCII characters are escaped to Unicode. When set to True, all non-ASCII characters will be escaped; False retains the original characters. Only effective when format_json is True | False |
| Attribute | Attribute Description |
|---|---|
| json | Get the prediction result in json format |
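Putting the methods and attribute above together, a minimal result-handling sketch (the save path is illustrative):

```python
for res in output:
    # Pretty-print the prediction to the terminal
    res.print(format_json=True, indent=4, ensure_ascii=False)
    # Save as a JSON file; the file name follows the input file name
    res.save_to_json(save_path="./output/")
    # Access the raw prediction result in json format
    print(res.json)
```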
For more information on the usage of PaddleX single-model inference API, please refer to PaddleX Single-Model Python Script Usage Instructions.
If you are pursuing higher precision with existing models, you can use the secondary development capabilities of PaddleX to develop better 3D detection models. Before developing 3D detection models with PaddleX, please make sure to install the 3D detection model training plugin for PaddleX. The installation process is described in the PaddleX Local Installation Guide.
Before training a model, you need to prepare the dataset for the corresponding task module. PaddleX provides a data validation feature for each module, and only data that passes the validation can be used for model training. In addition, PaddleX offers a Demo dataset for each module, and you can complete subsequent development based on the official Demo data.
You can refer to the following command to download the Demo dataset to the specified folder:
```bash
wget https://paddle-model-ecology.bj.bcebos.com/paddlex/data/nuscenes_demo.tar -P ./dataset
tar -xf ./dataset/nuscenes_demo.tar -C ./dataset/
```
Data validation can be completed with a single command:
```bash
python main.py -c paddlex/configs/modules/3d_bev_detection/BEVFusion.yaml \
    -o Global.mode=check_dataset \
    -o Global.dataset_dir=./dataset/nuscenes_demo
```
The specific content of the verification result file is:
```json
{
  "done_flag": true,
  "check_pass": true,
  "attributes": {
    "num_classes": 11,
    "train_mate": [
      {
        "sample_idx": "f9878012c3f6412184c294c13ba4bac3",
        "lidar_path": "./data/nuscenes/samples/LIDAR_TOP/n008-2018-05-21-11-06-59-0400__LIDAR_TOP__1526915243047392.pcd.bin",
        "image_paths": [
          "./data/nuscenes/samples/CAM_FRONT_LEFT/n008-2018-05-21-11-06-59-0400__CAM_FRONT_LEFT__1526915243004917.jpg",
          "./data/nuscenes/samples/CAM_FRONT/n008-2018-05-21-11-06-59-0400__CAM_FRONT__1526915243012465.jpg",
          "./data/nuscenes/samples/CAM_FRONT_RIGHT/n008-2018-05-21-11-06-59-0400__CAM_FRONT_RIGHT__1526915243019956.jpg",
          "./data/nuscenes/samples/CAM_BACK_RIGHT/n008-2018-05-21-11-06-59-0400__CAM_BACK_RIGHT__1526915243027813.jpg",
          "./data/nuscenes/samples/CAM_BACK/n008-2018-05-21-11-06-59-0400__CAM_BACK__1526915243037570.jpg",
          "./data/nuscenes/samples/CAM_BACK_LEFT/n008-2018-05-21-11-06-59-0400__CAM_BACK_LEFT__1526915243047295.jpg"
        ]
      }
    ],
    "val_mate": [
      {
        "sample_idx": "30e55a3ec6184d8cb1944b39ba19d622",
        "lidar_path": "./data/nuscenes/samples/LIDAR_TOP/n015-2018-07-11-11-54-16+0800__LIDAR_TOP__1531281439800013.pcd.bin",
        "image_paths": [
          "./data/nuscenes/samples/CAM_FRONT_LEFT/n015-2018-07-11-11-54-16+0800__CAM_FRONT_LEFT__1531281439754844.jpg",
          "./data/nuscenes/samples/CAM_FRONT/n015-2018-07-11-11-54-16+0800__CAM_FRONT__1531281439762460.jpg",
          "./data/nuscenes/samples/CAM_FRONT_RIGHT/n015-2018-07-11-11-54-16+0800__CAM_FRONT_RIGHT__1531281439770339.jpg",
          "./data/nuscenes/samples/CAM_BACK_RIGHT/n015-2018-07-11-11-54-16+0800__CAM_BACK_RIGHT__1531281439777893.jpg",
          "./data/nuscenes/samples/CAM_BACK/n015-2018-07-11-11-54-16+0800__CAM_BACK__1531281439787525.jpg",
          "./data/nuscenes/samples/CAM_BACK_LEFT/n015-2018-07-11-11-54-16+0800__CAM_BACK_LEFT__1531281439797423.jpg"
        ]
      }
    ]
  },
  "analysis": {
    "histogram": "check_dataset/histogram.png"
  },
  "dataset_path": "/workspace/bevfusion/Paddle3D/data/nuscenes",
  "show_type": "txt",
  "dataset_type": "NuscenesMMDataset"
}
```
In the verification results above, check_pass being true indicates that the dataset format meets the requirements.
After completing dataset verification, you can normally convert the dataset format or re-split the training/validation ratio by modifying the configuration file or appending hyperparameters. However, the 3D multimodal fusion detection module does not support dataset format conversion or dataset splitting.
A single command can complete the model training. Taking the training of the 3D multimodal fusion detection model BEVFusion as an example:
```bash
python main.py -c paddlex/configs/modules/3d_bev_detection/BEVFusion.yaml \
    -o Global.mode=train \
    -o Global.dataset_dir=./dataset/nuscenes_demo
```
The following steps are required:

* Specify the path to the model's .yaml configuration file (here it is BEVFusion.yaml; when training other models, you need to specify the corresponding configuration file. The correspondence between models and configuration files can be found in PaddleX Model List (CPU/GPU)).
* Specify the mode as model training: -o Global.mode=train
* Specify the path to the training dataset: -o Global.dataset_dir
Other related parameters can be set by modifying the fields under Global and Train in the .yaml configuration file, or by appending parameters on the command line. For example, to train on the first 2 GPUs: -o Global.device=gpu:0,1; to set the number of training epochs to 10: -o Train.epochs_iters=10. For more modifiable parameters and their detailed explanations, refer to the configuration file instructions for the corresponding model task module in PaddleX Common Model Configuration Parameters.

By default, outputs are saved to the output directory; if you need to specify a different save path, you can set it through the -o Global.output field in the configuration file. After model training is completed, all outputs are saved in the specified output directory (default is ./output/), typically including the following:
* train_result.json: The training result record file, which logs whether the training task completed normally, as well as weight metrics, related file paths, etc.
* train.log: The training log file, which records changes in model metrics and loss during training.
* config.yaml: The training configuration file, which records the hyperparameter settings for this training session.
* .pdparams, .pdopt, .pdiparams, .pdmodel: Model weight-related files, including network parameters, optimizer state, static graph network parameters, and the static graph network structure.

After completing model training, you can evaluate the specified model weight file on the validation set to verify the model's accuracy. With PaddleX, model evaluation can be completed with a single command:
```bash
python main.py -c paddlex/configs/modules/3d_bev_detection/BEVFusion.yaml \
    -o Global.mode=evaluate \
    -o Global.dataset_dir=./dataset/nuscenes_demo
```
Similar to model training, the following steps are required:

* Specify the path to the model's .yaml configuration file (here it is BEVFusion.yaml)
* Specify the mode as model evaluation: -o Global.mode=evaluate
* Specify the path to the validation dataset: -o Global.dataset_dir
Other related parameters can be set by modifying the fields under Global and Evaluate in the .yaml configuration file. For details, please refer to PaddleX Common Model Configuration Parameters.

When evaluating the model, the path of the model weight file needs to be specified. Each configuration file has a default weight save path built in. If you need to change it, simply append a command-line parameter, such as -o Evaluate.weight_path=./output/best_model/best_model.pdparams.
After the model evaluation is completed, an evaluate_result.json file will be generated, which records the evaluation results. Specifically, it records whether the evaluation task was completed normally and the evaluation metrics of the model, including mAP and NDS.
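Since evaluate_result.json is a plain JSON file, the recorded metrics can be read back programmatically. A minimal sketch (the output path and the exact keys inside the file are assumptions):

```python
import json

# Load the evaluation record written by the command above (path is illustrative)
with open("./output/evaluate_result.json", "r", encoding="utf-8") as f:
    eval_result = json.load(f)

# Inspect completion status and metrics such as mAP and NDS
print(eval_result)
```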
After completing the training and evaluation of the model, you can use the trained model weights for inference prediction or integrate them into your own Python project.
To perform inference prediction via the command line, you only need the following command. Before running the code below, please download the sample data to your local machine.
```bash
python main.py -c paddlex/configs/modules/3d_bev_detection/BEVFusion.yaml \
    -o Global.mode=predict \
    -o Predict.model_dir="./output/best_model/inference" \
    -o Predict.input="nuscenes_demo_infer.tar"
```
Similar to model training and evaluation, the following steps are required:

* Specify the path to the model's .yaml configuration file (here it is BEVFusion.yaml)
* Specify the mode as model prediction: -o Global.mode=predict
* Specify the model weight path: -o Predict.model_dir="./output/best_model/inference"
* Specify the input data path: -o Predict.input="..."

Other related parameters can be set by modifying the fields under Global and Predict in the .yaml configuration file. For details, please refer to PaddleX Common Model Configuration Parameters.
The model can be directly integrated into the PaddleX production line or into your own project.
1. Production Line Integration
The 3D multimodal fusion detection module can be integrated into the PaddleX 3D detection production line. Simply replacing the model path completes the model update for the 3D detection module in the relevant production line. In production line integration, you can deploy your model using high-performance deployment and service-oriented deployment.
2. Module Integration
The weights you generate can be directly integrated into the 3D multimodal fusion detection module. You can refer to the Python example code in [Quick Integration](). Just replace the model with the path of the model you have trained.
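A minimal module-integration sketch in that spirit, assuming the create_model API shown earlier (the weight and input paths are illustrative):

```python
from paddlex import create_model

# Point model_dir at your own trained/exported weights (illustrative path)
model = create_model(model_name="BEVFusion", model_dir="./output/best_model/inference")

for res in model.predict(input="nuscenes_demo_infer.tar", batch_size=1):
    res.print()
    res.save_to_json(save_path="./output/")
```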