The prosperity of the PaddlePaddle ecosystem is inseparable from the contributions of developers and users. We warmly welcome you to extend PaddleX's device compatibility and greatly appreciate your feedback.
Currently, PaddleX supports Intel and Apple M-series CPUs, NVIDIA GPUs, XPU, Ascend NPU, Hygon DCU, and MLU. If the device you wish to support is not within this scope, you can contribute by following the methods below.
The PaddlePaddle deep learning framework provides multiple integration solutions, including operator development and mapping, subgraph and whole-graph integration, deep learning compiler backend integration, and open neural network format conversion. Device vendors can flexibly choose among them based on their chip architecture design and the maturity of their software stack. For details, please refer to PaddlePaddle Custom Device Integration Solutions.
Because PaddleX is built on the PaddlePaddle model libraries, once the device has been integrated into the PaddlePaddle backend, choose the corresponding devkit and submit code for the models the device already supports, so that the relevant devkits are adapted to that device. Refer to the contribution guides of each devkit:
After completing the integration of the device into PaddlePaddle and the PaddleCV devkits, you need to update the device-recognition code and documentation in PaddleX.
Since different AI computing devices support different sets of models, PaddleX internally uses a whitelist to determine whether a specific model is supported on a given device. The relevant code is located in XXX_WHITELIST in PaddleX Model Whitelist. Please configure this list according to the actual support status.
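The whitelist check can be sketched roughly as follows. This is a minimal illustration, not the actual PaddleX code: the variable and model names below are placeholders for the real `XXX_WHITELIST` entries.

```python
# Hypothetical per-device model whitelist; the real variable names and
# model lists in PaddleX differ.
NPU_WHITELIST = {"ResNet50", "PP-OCRv4_mobile_det", "PicoDet-S"}

def is_model_supported(model_name, whitelist):
    """Return True if the model is on the device's whitelist."""
    return model_name in whitelist
```

When adding a new device, the essential work is filling in such a set with every model you have validated on that device.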
Additionally, update the check_supported_device_type function in Device Validation.
Update the list of supported AI computing devices in PaddleX. The relevant code is located in SUPPORTED_DEVICE_TYPE in PaddleX Hardware Support List.
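The two steps above (the validation function and the supported-device list) can be sketched together as follows. The device names and error message are illustrative assumptions, not the actual PaddleX implementation.

```python
# Hypothetical supported-device list and validation helper; the actual
# values and behavior of check_supported_device_type in PaddleX differ.
SUPPORTED_DEVICE_TYPE = ["cpu", "gpu", "xpu", "npu", "dcu", "mlu"]

def check_supported_device_type(device_type):
    """Raise if the requested device type is not recognized by PaddleX."""
    if device_type not in SUPPORTED_DEVICE_TYPE:
        raise ValueError(
            f"Unsupported device type {device_type!r}; "
            f"expected one of {SUPPORTED_DEVICE_TYPE}."
        )
```

Adding a new device type means appending its identifier to the list so that the check accepts it.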
If special environment variables need to be set when using the relevant device, you can modify the device environment setup code. The relevant code is located in the set_env_for_device_type function in PaddleX Environment Variable Settings.
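A device-specific environment hook of this kind might look like the sketch below. The environment variable names are placeholders, not real PaddleX or vendor flags.

```python
import os

# Hypothetical sketch of set_env_for_device_type; vendor runtimes often
# require environment variables (visible devices, log level, ...) to be
# set before the device is first used. "my_npu" and the MY_NPU_* names
# are illustrative placeholders.
def set_env_for_device_type(device_type):
    if device_type == "my_npu":
        # setdefault leaves any value the user already exported untouched
        os.environ.setdefault("MY_NPU_VISIBLE_DEVICES", "0")
        os.environ.setdefault("MY_NPU_LOG_LEVEL", "3")
```

Using `setdefault` rather than direct assignment is one way to let users override the defaults from their shell.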
When creating a Predictor, PaddleX checks whether the device is supported. The relevant code is located in SUPPORT_DEVICE in PaddleX Predictor Option.
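Such a per-option support check might look like the following sketch; the class name, device names, and error message are assumptions for illustration only.

```python
# Hypothetical predictor-option class that validates the requested device
# against a SUPPORT_DEVICE tuple; the real PaddleX class differs.
class PredictorOption:
    SUPPORT_DEVICE = ("cpu", "gpu", "my_npu")

    def __init__(self, device_type="cpu"):
        if device_type not in self.SUPPORT_DEVICE:
            raise ValueError(
                f"Device {device_type!r} is not supported; "
                f"expected one of {self.SUPPORT_DEVICE}."
            )
        self.device_type = device_type
```

A newly adapted device must be added to this tuple, or Predictor creation will fail even if the framework-level integration is complete.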
PaddleX's inference capability is provided by the Paddle Inference Predictor. When creating a Predictor, you need to select the appropriate device based on the device information and create the corresponding passes. The relevant code is located in the _create function in PaddleX Predictor Creation.
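The device dispatch inside such a `_create`-style helper can be sketched as below. The real code configures a `paddle.inference.Config`; this simplified sketch only records which configuration branch each device would take, and the branch labels are placeholders.

```python
# Hypothetical sketch of device dispatch during Predictor creation.
# The real code builds a paddle.inference.Config; here we just return
# a label describing the branch taken.
def select_device_branch(device_type, device_id=0):
    if device_type == "cpu":
        return "default CPU path"
    if device_type == "gpu":
        # the real code would enable the GPU on the Config, e.g. via
        # config.enable_use_gpu(memory_pool_init_size_mb, device_id)
        return f"gpu:{device_id}"
    # Devices integrated through PaddleCustomDevice typically go through
    # a custom-device branch (e.g. config.enable_custom_device(...)),
    # possibly deleting passes the device does not support.
    return f"custom_device:{device_type}:{device_id}"
```

For a new device, this is where you add the branch that enables your backend and removes any incompatible passes.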
TODO
Please update the PaddleX multi-devices user guide and add the newly supported device information to the documentation. Both Chinese and English versions need to be updated. The Chinese version is PaddleX多硬件使用指南, and the English version is PaddleX Multi-Devices Usage Guide.
Please provide device-related installation tutorials in both Chinese and English. The Chinese version can refer to 昇腾 NPU 飞桨安装教程, and the English version can refer to Ascend NPU PaddlePaddle Installation Tutorial.
Please provide a list of models supported by the device in both Chinese and English. The Chinese version can refer to PaddleX模型列表(昇腾 NPU), and the English version can refer to PaddleX Model List (Huawei Ascend NPU).
Once you have completed the device adaptation work, please submit a Pull Request to PaddleX with the relevant information. After validating the models, we will confirm and merge the code.
The PR needs to provide the information required to reproduce model accuracy, including at least the following:
The software versions used to validate model accuracy, including but not limited to:
- Paddle version
- PaddleCustomDevice version (if any)
- The branch of PaddleX or the corresponding devkit

The machine environment used to validate model accuracy, including but not limited to:
- Chip model
- Operating system version
- Device driver version
- Operator library version, etc.