Before using Python scripts for rapid inference on model pipelines, please ensure you have installed PaddleX following the PaddleX Local Installation Guide.
Taking the image classification pipeline as an example, the usage is as follows:
```python
from paddlex import create_pipeline

pipeline = create_pipeline("image_classification")
output = pipeline.predict(
    "https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/general_image_classification_001.jpg",
    batch_size=1,
    topk=5,
)
for res in output:
    res.print(json_format=False)
    res.save_to_img("./output/")
    res.save_to_json("./output/res.json")
```
In short, there are only three steps:

- Call the `create_pipeline()` method to instantiate the prediction model pipeline object;
- Call the `predict()` method of the pipeline object to perform inference;
- Call `print()`, `save_to_xxx()`, and other related methods to print or save the prediction results.

### `create_pipeline()`

`create_pipeline`: instantiates the prediction model pipeline object.

Parameters:

- `pipeline`: `str` type, the pipeline name or the path to a local pipeline configuration file, such as `"image_classification"` or `"/path/to/image_classification.yaml"`;
- `device`: `str` type, used to set the inference device. For GPUs, you can specify the card number, such as `"cpu"` or `"gpu:2"`. By default, GPU 0 is used if available, otherwise the CPU;
- `pp_option`: `PaddlePredictorOption` type, used to configure the inference engine. Please refer to 4. Inference Backend Configuration for details.

Return value:

- `BasePredictor` type.
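The `device` string format described above (`"cpu"`, `"gpu:2"`, and so on) can be illustrated with a small parser. `parse_device` is a hypothetical helper written for this sketch, not part of the PaddleX API; it only mirrors the documented `"<type>[:<card id>]"` convention:

```python
def parse_device(device: str):
    """Split a PaddleX-style device string into (device_type, card_id).

    Hypothetical helper mirroring the documented "cpu" / "gpu:2" format;
    not part of the PaddleX API.
    """
    if ":" in device:
        dev_type, card_id = device.split(":", 1)
        return dev_type, int(card_id)
    # No card number given, e.g. "cpu"
    return device, None

print(parse_device("cpu"))    # ('cpu', None)
print(parse_device("gpu:2"))  # ('gpu', 2)
```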
### `predict()` Method of the Prediction Model Pipeline Object

`predict`: uses the defined prediction model pipeline to predict input data.

Parameters:

- `input`: any type; supports a `str` giving the path of a file to be predicted, a directory containing such files, or a network URL; for CV tasks, a `numpy.ndarray` holding image data; for TS tasks, a `pandas.DataFrame`; lists of the above types are also supported.

Return value:

- A `generator` that yields the prediction result of one sample per call.
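The accepted input types can be sketched as follows. The `pipeline.predict()` call itself is shown as a comment because running it requires an installed pipeline (and a model download on first use); the image shape and the DataFrame columns are illustrative placeholders, not requirements:

```python
import numpy as np
import pandas as pd

# str: a file path, a directory of files, or a network URL
path_input = "/path/to/image.jpg"

# numpy.ndarray: decoded image data for CV tasks (here a blank HWC uint8 image)
image_input = np.zeros((224, 224, 3), dtype=np.uint8)

# pandas.DataFrame: time-series data for TS tasks
ts_input = pd.DataFrame(
    {"date": pd.date_range("2024-01-01", periods=3), "value": [1.0, 2.0, 3.0]}
)

# A list mixing any of the above is also accepted
batch_input = [path_input, image_input]

# With a pipeline instance, any of these would be a valid call:
# output = pipeline.predict(batch_input)
```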
### Accessing and Saving Prediction Results

The prediction results of a pipeline can be accessed and saved through the corresponding attributes and methods:

Attributes:

- `str`: `str` type, the string representation of the prediction result;
- `json`: `dict` type, the prediction result in JSON format;
- `img`: `PIL.Image` type, the visualization image of the prediction result;
- `html`: `str` type, the HTML representation of the prediction result;
- more attributes: different pipelines support different representations of their prediction results. Please refer to the specific pipeline tutorial documentation for details.

Methods:

- `print()`: outputs the prediction result. Note that content that is inconvenient to output directly is omitted. Parameters:
    - `json_format`: `bool` type, default `False`, i.e. JSON formatting is not used;
    - `indent`: `int` type, default `4`, valid when `json_format` is `True`; the indentation level for JSON formatting;
    - `ensure_ascii`: `bool` type, default `False`, valid when `json_format` is `True`.
- `save_to_json()`: saves the prediction result as a JSON file. Note that when the result contains data that cannot be serialized to JSON, it is automatically converted so that it can be serialized and saved. Parameters:
    - `save_path`: `str` type, the path to save the result;
    - `indent`: `int` type, default `4`, the indentation level for JSON formatting;
    - `ensure_ascii`: `bool` type, default `False`.
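The `json_format`/`indent`/`ensure_ascii` options, and the automatic conversion that `save_to_json()` performs for non-serializable data, behave like the standard `json` module. The sketch below illustrates the idea; the `to_serializable` fallback is illustrative, not PaddleX's actual implementation:

```python
import json
import numpy as np

# A result dict containing numpy arrays, which json cannot serialize directly
result = {"class_ids": np.array([3, 7]), "scores": np.array([0.92, 0.05]), "label": "猫"}

def to_serializable(obj):
    # Fallback converter: turn non-JSON-serializable values (here, numpy
    # arrays) into plain Python types, much as save_to_json() does.
    if isinstance(obj, np.ndarray):
        return obj.tolist()
    raise TypeError(f"not serializable: {type(obj)}")

# indent=4 pretty-prints; ensure_ascii=False keeps non-ASCII text readable
text = json.dumps(result, default=to_serializable, indent=4, ensure_ascii=False)
print(text)
```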
- `save_to_img()`: visualizes the prediction result and saves it as an image. Parameters:
    - `save_path`: `str` type, the path to save the result.
- `save_to_csv()`: saves the prediction result as a CSV file. Parameters:
    - `save_path`: `str` type, the path to save the result.
- `save_to_html()`: saves the prediction result as an HTML file. Parameters:
    - `save_path`: `str` type, the path to save the result.
- `save_to_xlsx()`: saves the prediction result as an XLSX file. Parameters:
    - `save_path`: `str` type, the path to save the result.
- more methods: different pipelines support different saving methods for their prediction results. Please refer to the specific pipeline tutorial documentation for details.

### 4. Inference Backend Configuration

PaddleX supports configuring the inference backend through `PaddlePredictorOption`. The relevant APIs are as follows:
- `device`: the inference device.
    - Setter parameter: `str` type. Device types include `'gpu'`, `'cpu'`, `'npu'`, `'xpu'`, and `'mlu'`. When using an accelerator card, you can specify the card number, e.g. `'gpu:0'` for GPU 0. The default is `'gpu:0'`.
    - Getter return value: `str` type, the currently set inference device.
- `run_mode`: the inference backend.
    - Setter parameter: `str` type, one of `'paddle'`, `'trt_fp32'`, `'trt_fp16'`, `'trt_int8'`, `'mkldnn'`, `'mkldnn_bf16'`; `'mkldnn'` is only selectable when the inference device is `'cpu'`. The default is `'paddle'`.
    - Getter return value: `str` type, the currently set inference backend.
- `cpu_threads`: the number of CPU threads for the acceleration library, only valid when the inference device is `'cpu'`.
    - Setter parameter: `int` type, the number of CPU threads used during CPU inference.
    - Getter return value: `int` type, the currently set number of threads.
- `get_support_run_mode()`: gets the supported inference backend configurations.
- `get_support_device()`: gets the supported device types.
- `get_device()`: gets the currently set device. Return value: `str` type.
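The documented device/backend constraint (MKL-DNN backends require the CPU device) can be sketched as a small validation helper. `validate_run_mode` is hypothetical, not a PaddleX API, and it assumes `'mkldnn_bf16'` shares the CPU-only constraint stated for `'mkldnn'`:

```python
SUPPORTED_RUN_MODES = ["paddle", "trt_fp32", "trt_fp16", "trt_int8", "mkldnn", "mkldnn_bf16"]

def validate_run_mode(device: str, run_mode: str) -> bool:
    """Check a device/run_mode combination against the documented rules.

    Hypothetical helper; PaddlePredictorOption performs its own validation.
    """
    if run_mode not in SUPPORTED_RUN_MODES:
        return False
    device_type = device.split(":", 1)[0]  # "gpu:0" -> "gpu"
    # Per the note above: mkldnn (and, we assume, mkldnn_bf16) is CPU-only
    if run_mode.startswith("mkldnn") and device_type != "cpu":
        return False
    return True

print(validate_run_mode("cpu", "mkldnn"))    # True
print(validate_run_mode("gpu:0", "mkldnn"))  # False
```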