

---
comments: true
---

# General Video Classification Pipeline User Guide

## 1. Introduction to the General Video Classification Pipeline

Video classification is a technology that assigns video clips to predefined categories. It is widely used in action recognition, event detection, and content recommendation. Video classification can identify various dynamic events and scenes, such as sports activities, natural phenomena, traffic conditions, etc., and classify them based on their characteristics. By using deep learning models, especially the combination of Convolutional Neural Networks (CNN) and Recurrent Neural Networks (RNN), video classification can automatically extract spatiotemporal features from videos and perform accurate classification. This technology has important applications in video surveillance, media retrieval, and personalized recommendation systems.

The general video classification pipeline is used to solve video classification tasks by extracting theme and category information from videos and outputting them as labels. This pipeline integrates the industry-renowned PP-TSM and PP-TSMv2 video classification systems, supporting the recognition of 400 video categories. Based on this pipeline, accurate classification of video content can be achieved, covering various fields such as media, security, education, and transportation. This pipeline also provides flexible service deployment options, supporting multiple programming languages on various hardware. Additionally, this pipeline offers secondary development capabilities, allowing you to train and fine-tune models on your own dataset, with seamless integration of the trained models.

The general video classification pipeline includes a video classification module. If you prioritize model accuracy, choose a model with higher accuracy; if you prioritize inference speed, choose a model with faster inference speed; if you prioritize storage size, choose a model with a smaller storage size.

| Model | Model Download Link | Top-1 Acc (%) | Model Storage Size (M) | Description |
|-------|---------------------|---------------|------------------------|-------------|
| PP-TSM-R50_8frames_uniform | Inference Model/Trained Model | 74.36 | 93.4 M | PP-TSM is a video classification model developed by Baidu PaddlePaddle's Vision Team. This model is optimized based on the ResNet-50 backbone network and undergoes model tuning in six aspects: data augmentation, network structure fine-tuning, training strategies, Batch Normalization (BN) layer optimization, pre-trained model selection, and model distillation. Under the center crop evaluation method, its accuracy on Kinetics-400 is improved by 3.95 points compared to the original paper's implementation. |
| PP-TSMv2-LCNetV2_8frames_uniform | Inference Model/Trained Model | 71.71 | 22.5 M | PP-TSMv2 is a lightweight video classification model optimized based on the CPU-oriented model PP-LCNetV2. It undergoes model tuning in seven aspects: backbone network and pre-trained model selection, data augmentation, TSM module tuning, input frame number optimization, decoding speed optimization, DML distillation, and LTA module. Under the center crop evaluation method, it achieves an accuracy of 75.16%, with an inference speed of only 456 ms on the CPU for a 10-second video input. |
| PP-TSMv2-LCNetV2_16frames_uniform | Inference Model/Trained Model | 73.11 | 22.5 M | |

Note: The above accuracy metrics refer to Top-1 Accuracy on the K400 validation set.
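If you want the pipeline to use a specific model from the table above, one option is to set model_name in the pipeline configuration file (the full file format is shown in Section 4.2). Below is a minimal sketch, assuming the SubModules layout from the Section 4.2 example:

```yaml
# Sketch: point the pipeline at the higher-accuracy model from the table above.
# The surrounding file structure follows the example in Section 4.2.
SubModules:
  VideoClassification:
    module_name: video_classification
    model_name: PP-TSM-R50_8frames_uniform  # higher Top-1 accuracy, larger storage
    model_dir: null  # null uses the official weights for model_name (assumption based on the note in Section 4.2)
    batch_size: 1
    topk: 1
```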

## 2. Quick Start

PaddleX supports experiencing the pipeline's effects locally using command line or Python.

Before using the general video classification pipeline locally, please ensure that you have completed the installation of the PaddleX wheel package according to the PaddleX Local Installation Guide.

### 2.1 Command Line Experience

You can quickly experience the video classification pipeline with a single command. Use the test file and replace --input with your local path for prediction.

```bash
paddlex --pipeline video_classification \
    --input general_video_classification_001.mp4 \
    --topk 5 \
    --save_path ./output \
    --device gpu:0
```

The relevant parameter descriptions can be found in 2.2 Integration with Python Script.

After running, the result will be printed to the terminal, as follows:

```
{'res': {'input_path': 'general_video_classification_001.mp4', 'class_ids': array([  0, ..., 162], dtype=int32), 'scores': [0.91997, 0.07052, 0.00237, 0.00214, 0.00158], 'label_names': ['abseiling', 'rock_climbing', 'climbing_tree', 'riding_mule', 'ice_climbing']}}
```

The explanation of the result parameters can be found in the result explanation in 2.2 Integration with Python Script.

The visualization results are saved under save_path.

### 2.2 Integration with Python Script

The above command line is for a quick experience and to view the results. Generally, in a project, you often need to integrate via code. You can complete fast inference with the pipeline in just a few lines of code, as follows:

```python
from paddlex import create_pipeline

pipeline = create_pipeline(pipeline="video_classification")

output = pipeline.predict("general_video_classification_001.mp4", topk=5)
for res in output:
    res.print()
    res.save_to_video(save_path="./output/")
    res.save_to_json(save_path="./output/")
```
    

In the above Python script, the following steps are executed:

(1) The video classification pipeline object is instantiated via create_pipeline(). The specific parameter descriptions are as follows:

| Parameter | Description | Type | Default |
|-----------|-------------|------|---------|
| pipeline | The name of the pipeline or the path to the pipeline configuration file. If it is a pipeline name, it must be a pipeline supported by PaddleX. | str | None |
| config | Specific configuration information for the pipeline (if set together with pipeline, it takes precedence over pipeline, and the pipeline name in it must be consistent with pipeline). | dict[str, Any] | None |
| device | The inference device for the pipeline. It supports specifying a specific GPU card number, such as "gpu:0", card numbers of other hardware, such as "npu:0", and "cpu" for the CPU. | str | gpu:0 |
| use_hpip | Whether to enable high-performance inference. Only available when the pipeline supports high-performance inference. | bool | False |
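
As a minimal sketch of this step using the parameters from the table above (the device value is illustrative):

```python
from paddlex import create_pipeline

# Instantiate the pipeline on the CPU instead of the default gpu:0.
pipeline = create_pipeline(pipeline="video_classification", device="cpu")
```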

(2) The predict() method of the general video classification pipeline object is called for inference prediction. This method returns a generator. Here are the parameters and their descriptions for the predict() method:

| Parameter | Description | Type | Options | Default |
|-----------|-------------|------|---------|---------|
| input | The data to be predicted, supporting multiple input types (required). | str\|list | **str**: the local path of a video file, such as `/root/data/video.mp4`; a URL link, such as the network URL of a video file; or a local directory that contains the videos to be predicted, such as `/root/data/`.<br>**list**: the elements of the list must be of the above types, such as `["/root/data/video1.mp4", "/root/data/video2.mp4"]` or `["/root/data1", "/root/data2"]`. | None |
| device | The inference device for the pipeline. | str\|None | **CPU**: e.g. `cpu` uses the CPU for inference;<br>**GPU**: e.g. `gpu:0` uses the first GPU for inference;<br>**NPU**: e.g. `npu:0` uses the first NPU for inference;<br>**XPU**: e.g. `xpu:0` uses the first XPU for inference;<br>**MLU**: e.g. `mlu:0` uses the first MLU for inference;<br>**DCU**: e.g. `dcu:0` uses the first DCU for inference;<br>**None**: if set to None, the value set at pipeline initialization is used; during initialization, the local GPU 0 is preferred, falling back to the CPU if it is unavailable. | None |
| topk | The top topk classes and their corresponding classification probabilities to keep in the prediction results. | int\|None | **int**: any integer greater than 0;<br>**None**: if set to None, the value set at pipeline initialization is used, which defaults to 1. | None |
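
As a sketch of this step combining the parameters above (the file paths are placeholders):

```python
# Predict a batch of local videos, keeping the top 3 classes per video
# and forcing inference onto the CPU for this call.
output = pipeline.predict(
    ["/root/data/video1.mp4", "/root/data/video2.mp4"],
    topk=3,
    device="cpu",
)
```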

(3) Process the prediction results. The prediction result for each sample is of the dict type and supports operations such as printing, saving as a video, and saving as a json file:

| Method | Description | Parameter | Type | Parameter Description | Default |
|--------|-------------|-----------|------|-----------------------|---------|
| print() | Print the result to the terminal | format_json | bool | Whether to format the output content with JSON indentation | True |
| | | indent | int | Specify the indentation level to beautify the output JSON data and make it more readable. Effective only when format_json is True | 4 |
| | | ensure_ascii | bool | Control whether to escape non-ASCII characters to Unicode. When set to True, all non-ASCII characters are escaped; False retains the original characters. Effective only when format_json is True | False |
| save_to_json() | Save the result as a JSON file | save_path | str | Path to save the file. When it is a directory, the saved file is named consistently with the input file | None |
| | | indent | int | Specify the indentation level to beautify the output JSON data and make it more readable | 4 |
| | | ensure_ascii | bool | Control whether to escape non-ASCII characters to Unicode. When set to True, all non-ASCII characters are escaped; False retains the original characters | False |
| save_to_video() | Save the result as a video file | save_path | str | Path to save the file; supports a directory or a file path | None |
- Calling the `print()` method will print the result to the terminal. The printed content is explained as follows:
  - `input_path`: (str) The input path of the video to be predicted
  - `class_ids`: (numpy.ndarray) List of class IDs for the video classification
  - `scores`: (List[float]) List of confidence scores for the video classification
  - `label_names`: (List[str]) List of class names for the video classification
- Calling the `save_to_json()` method will save the above content to the specified save_path. If a directory is specified, the saved path will be save_path/{your_video_basename}_res.json; if a file is specified, the result is saved directly to that file. Since JSON files do not support saving numpy arrays, numpy.array values are converted to lists.
- Calling the `save_to_video()` method will save the visualization results to the specified save_path. If a directory is specified, the saved path will be save_path/{your_video_basename}_res.{your_video_extension}; if a file is specified, the result is saved directly to that file. (The pipeline usually produces multiple result videos, so it is not recommended to specify a specific file path directly; otherwise the videos would overwrite one another and only the last one would be retained.)

- Additionally, visualized videos and prediction results can be obtained through attributes, as follows:

| Attribute | Description |
|-----------|-------------|
| json | Get the prediction result in json format |
| video | Get the visualized video in dict format |

- The prediction result obtained via the json attribute is a dict, and its content is consistent with what is saved by calling the save_to_json() method.
- The prediction result returned by the video attribute is a dict. The key is res, and the corresponding value is a tuple: the first element is the visualized video array with dimensions (number of frames, video height, video width, number of channels); the second element is the frame rate.
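
As a sketch of accessing these attributes inside the prediction loop (the tuple layout follows the description above):

```python
for res in output:
    # Same content as the file written by save_to_json()
    result_dict = res.json

    # The "res" key maps to (frames, fps) per the description above
    frames, fps = res.video["res"]
    print(frames.shape)  # (num_frames, height, width, channels)
    print(f"Frame rate: {fps}")
```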

In addition, you can obtain the video classification pipeline configuration file and load the configuration file for prediction. You can execute the following command to save the result in my_path:

```bash
paddlex --get_pipeline_config video_classification --save_path ./my_path
```

If you have obtained the configuration file, you can customize the settings for the video classification pipeline. Simply modify the value of the pipeline parameter in the create_pipeline method to the path of the pipeline configuration file. An example is as follows:

```python
from paddlex import create_pipeline

pipeline = create_pipeline(pipeline="./my_path/video_classification.yaml")

output = pipeline.predict("general_video_classification_001.mp4", topk=5)
for res in output:
    res.print()
    res.save_to_video(save_path="./output/")
    res.save_to_json(save_path="./output/")
```

Note: The parameters in the configuration file are for pipeline initialization. If you wish to change the initialization parameters of the general video classification pipeline, you can directly modify the parameters in the configuration file and load the configuration file for prediction. Additionally, CLI prediction also supports passing in the configuration file by specifying its path with --pipeline.
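
For example, a sketch of CLI prediction that loads the saved configuration file (paths follow the earlier examples):

```bash
paddlex --pipeline ./my_path/video_classification.yaml \
    --input general_video_classification_001.mp4 \
    --topk 5 \
    --save_path ./output \
    --device gpu:0
```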

## 3. Development Integration/Deployment

If the pipeline meets your requirements for inference speed and accuracy, you can proceed directly with development integration/deployment.

If you need to apply the pipeline directly to your Python project, you can refer to the example code in 2.2 Integration with Python Script.

Additionally, PaddleX provides three other deployment methods, detailed as follows:

🚀 High-Performance Inference: In actual production environments, many applications have stringent standards for the performance metrics of deployment strategies (especially response speed) to ensure efficient system operation and a smooth user experience. For this purpose, PaddleX provides a high-performance inference plugin, aimed at deeply optimizing the performance of model inference and pre/post-processing, significantly accelerating the end-to-end process. For detailed high-performance inference procedures, please refer to the PaddleX High-Performance Inference Guide.

☁️ Service Deployment: Service deployment is a common form of deployment in actual production environments. By encapsulating the inference function as a service, clients can access these services via network requests to obtain inference results. PaddleX supports multiple pipeline service deployment solutions. For detailed pipeline service deployment procedures, please refer to the PaddleX Service Deployment Guide.

Below are the API reference and multi-language service invocation examples for basic service deployment:

**API Reference**

For the main operations provided by the service:

- The HTTP request method is POST.
- Both the request body and the response body are JSON data (JSON objects).
- When the request is processed successfully, the response status code is 200, and the response body has the following attributes:

| Name | Type | Meaning |
|------|------|---------|
| errorCode | integer | Error code. Fixed at 0. |
| errorMsg | string | Error description. Fixed at "Success". |

The response body may also have a result attribute, which is an object containing the operation result information.
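
As an illustrative sketch, a successful response envelope therefore has the following shape (the result content here is abbreviated):

```json
{
  "errorCode": 0,
  "errorMsg": "Success",
  "result": {
    "categories": [],
    "video": "xxxxxx"
  }
}
```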

- When the request is not processed successfully, the response body has the following attributes:

| Name | Type | Meaning |
|------|------|---------|
| errorCode | integer | Error code. Same as the response status code. |
| errorMsg | string | Error description. |

The main operations provided by the service are as follows:

- **infer**

Perform video classification.

`POST /video-classification`

- The attributes of the request body are as follows:

| Name | Type | Meaning | Required |
|------|------|---------|----------|
| video | string | The URL of a video file accessible by the server, or the Base64-encoded content of the video file. | Yes |
| inferenceParams | object | Inference parameters. | No |

The attributes of inferenceParams are as follows:

| Name | Type | Meaning | Required |
|------|------|---------|----------|
| topK | integer | Only the top topK categories with the highest scores are retained in the results. | No |
- When the request is processed successfully, the result in the response body has the following attributes:

| Name | Type | Meaning |
|------|------|---------|
| categories | array | Video category information. |
| video | string | The video with the classification result, encoded in Base64. |

Each element in categories is an object with the following attributes:

| Name | Type | Meaning |
|------|------|---------|
| id | integer | Category ID. |
| name | string | Category name. |
| score | number | Category score. |
An example of result is as follows:

```json
{
  "categories": [
    {
      "id": 5,
      "name": "Rabbit",
      "score": 0.93
    }
  ],
  "video": "xxxxxx"
}
```
    
**Multilingual Service Call Examples**

**Python**

```python
import base64
import requests

API_URL = "http://localhost:8080/video-classification"  # Service URL
video_path = "./demo.mp4"
output_video_path = "./out.mp4"

# Encode local video to Base64
with open(video_path, "rb") as file:
    video_bytes = file.read()
    video_data = base64.b64encode(video_bytes).decode("ascii")

payload = {"video": video_data}  # Base64 encoded file content or video URL

# Call API
response = requests.post(API_URL, json=payload)

# Process API response
assert response.status_code == 200
result = response.json()["result"]
with open(output_video_path, "wb") as file:
    file.write(base64.b64decode(result["video"]))
print(f"Output video saved at {output_video_path}")
print("\nCategories:")
print(result["categories"])
```
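
To limit the number of returned categories, here is a sketch of extending the payload above with inferenceParams (topK as described in the API reference):

```python
# Request only the top 3 categories for the same video.
payload = {
    "video": video_data,  # Base64 encoded file content or video URL
    "inferenceParams": {"topK": 3},
}
response = requests.post(API_URL, json=payload)
```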
    
**C++**

```cpp
#include <fstream>
#include <iostream>
#include <string>
#include <vector>

#include "cpp-httplib/httplib.h" // https://github.com/Huiyicc/cpp-httplib
#include "nlohmann/json.hpp"     // https://github.com/nlohmann/json
#include "base64.hpp"            // https://github.com/tobiaslocker/base64

int main() {
    httplib::Client client("localhost:8080");
    const std::string videoPath = "./demo.mp4";
    const std::string outputVideoPath = "./out.mp4";

    httplib::Headers headers = {
        {"Content-Type", "application/json"}
    };

    // Encode local video to Base64
    std::ifstream file(videoPath, std::ios::binary | std::ios::ate);
    std::streamsize size = file.tellg();
    file.seekg(0, std::ios::beg);

    std::vector<char> buffer(size);
    if (!file.read(buffer.data(), size)) {
        std::cerr << "Error reading file." << std::endl;
        return 1;
    }
    std::string bufferStr(reinterpret_cast<const char*>(buffer.data()), buffer.size());
    std::string encodedVideo = base64::to_base64(bufferStr);

    nlohmann::json jsonObj;
    jsonObj["video"] = encodedVideo;
    std::string body = jsonObj.dump();

    // Call API
    auto response = client.Post("/video-classification", headers, body, "application/json");

    // Process API response
    if (response && response->status == 200) {
        nlohmann::json jsonResponse = nlohmann::json::parse(response->body);
        auto result = jsonResponse["result"];

        encodedVideo = result["video"];
        std::string decodedString = base64::from_base64(encodedVideo);
        std::vector<unsigned char> decodedVideo(decodedString.begin(), decodedString.end());

        std::ofstream outputVideo(outputVideoPath, std::ios::binary | std::ios::out);
        if (outputVideo.is_open()) {
            outputVideo.write(reinterpret_cast<char*>(decodedVideo.data()), decodedVideo.size());
            outputVideo.close();
            std::cout << "Output video saved at " << outputVideoPath << std::endl;
        } else {
            std::cerr << "Unable to open file for writing: " << outputVideoPath << std::endl;
        }

        auto categories = result["categories"];
        std::cout << "\nCategories:" << std::endl;
        for (const auto& category : categories) {
            std::cout << category << std::endl;
        }
    } else {
        std::cout << "Failed to send HTTP request." << std::endl;
        return 1;
    }

    return 0;
}
```
**Java**

```java
import okhttp3.*;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.node.ObjectNode;

import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.util.Base64;

public class Main {
    public static void main(String[] args) throws IOException {
        String API_URL = "http://localhost:8080/video-classification"; // Service URL
        String videoPath = "./demo.mp4"; // Local video
        String outputVideoPath = "./out.mp4"; // Output video

        // Encode local video to Base64
        File file = new File(videoPath);
        byte[] fileContent = java.nio.file.Files.readAllBytes(file.toPath());
        String videoData = Base64.getEncoder().encodeToString(fileContent);

        ObjectMapper objectMapper = new ObjectMapper();
        ObjectNode params = objectMapper.createObjectNode();
        params.put("video", videoData); // Base64 encoded file content or video URL

        // Create OkHttpClient instance
        OkHttpClient client = new OkHttpClient();
        MediaType JSON = MediaType.Companion.get("application/json; charset=utf-8");
        RequestBody body = RequestBody.Companion.create(params.toString(), JSON);
        Request request = new Request.Builder()
                .url(API_URL)
                .post(body)
                .build();

        // Call API and process API response
        try (Response response = client.newCall(request).execute()) {
            if (response.isSuccessful()) {
                String responseBody = response.body().string();
                JsonNode resultNode = objectMapper.readTree(responseBody);
                JsonNode result = resultNode.get("result");
                String base64Video = result.get("video").asText();
                JsonNode categories = result.get("categories");

                byte[] videoBytes = Base64.getDecoder().decode(base64Video);
                try (FileOutputStream fos = new FileOutputStream(outputVideoPath)) {
                    fos.write(videoBytes);
                }
                System.out.println("Output video saved at " + outputVideoPath);
                System.out.println("\nCategories: " + categories.toString());
            } else {
                System.err.println("Request failed with code: " + response.code());
            }
        }
    }
}
```
    
**Go**

```go
package main

import (
    "bytes"
    "encoding/base64"
    "encoding/json"
    "fmt"
    "io/ioutil"
    "net/http"
)

func main() {
    API_URL := "http://localhost:8080/video-classification"
    videoPath := "./demo.mp4"
    outputVideoPath := "./out.mp4"

    // Base64 encode the local video
    videoBytes, err := ioutil.ReadFile(videoPath)
    if err != nil {
        fmt.Println("Error reading video file:", err)
        return
    }
    videoData := base64.StdEncoding.EncodeToString(videoBytes)

    payload := map[string]string{"video": videoData} // Base64 encoded file content or video URL
    payloadBytes, err := json.Marshal(payload)
    if err != nil {
        fmt.Println("Error marshaling payload:", err)
        return
    }

    // Call the API
    client := &http.Client{}
    req, err := http.NewRequest("POST", API_URL, bytes.NewBuffer(payloadBytes))
    if err != nil {
        fmt.Println("Error creating request:", err)
        return
    }

    res, err := client.Do(req)
    if err != nil {
        fmt.Println("Error sending request:", err)
        return
    }
    defer res.Body.Close()

    // Handle the API response
    body, err := ioutil.ReadAll(res.Body)
    if err != nil {
        fmt.Println("Error reading response body:", err)
        return
    }
    type Response struct {
        Result struct {
            Video      string                   `json:"video"`
            Categories []map[string]interface{} `json:"categories"`
        } `json:"result"`
    }
    var respData Response
    err = json.Unmarshal(body, &respData)
    if err != nil {
        fmt.Println("Error unmarshaling response body:", err)
        return
    }

    outputVideoData, err := base64.StdEncoding.DecodeString(respData.Result.Video)
    if err != nil {
        fmt.Println("Error decoding base64 video data:", err)
        return
    }
    err = ioutil.WriteFile(outputVideoPath, outputVideoData, 0644)
    if err != nil {
        fmt.Println("Error writing video to file:", err)
        return
    }
    fmt.Printf("Output video saved at %s\n", outputVideoPath)
    fmt.Println("\nCategories:")
    for _, category := range respData.Result.Categories {
        fmt.Println(category)
    }
}
```
    
**C#**

```csharp
using System;
using System.IO;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;
using Newtonsoft.Json.Linq;

class Program
{
    static readonly string API_URL = "http://localhost:8080/video-classification";
    static readonly string videoPath = "./demo.mp4";
    static readonly string outputVideoPath = "./out.mp4";

    static async Task Main(string[] args)
    {
        var httpClient = new HttpClient();

        // Base64 encode the local video
        byte[] videoBytes = File.ReadAllBytes(videoPath);
        string videoData = Convert.ToBase64String(videoBytes);

        var payload = new JObject{ { "video", videoData } }; // Base64 encoded file content or video URL
        var content = new StringContent(payload.ToString(), Encoding.UTF8, "application/json");

        // Call the API
        HttpResponseMessage response = await httpClient.PostAsync(API_URL, content);
        response.EnsureSuccessStatusCode();

        // Handle the API response
        string responseBody = await response.Content.ReadAsStringAsync();
        JObject jsonResponse = JObject.Parse(responseBody);

        string base64Video = jsonResponse["result"]["video"].ToString();
        byte[] outputVideoBytes = Convert.FromBase64String(base64Video);

        File.WriteAllBytes(outputVideoPath, outputVideoBytes);
        Console.WriteLine($"Output video saved at {outputVideoPath}");
        Console.WriteLine("\nCategories:");
        Console.WriteLine(jsonResponse["result"]["categories"].ToString());
    }
}
```
    

**Node.js**

```javascript
const axios = require('axios');
const fs = require('fs');

const API_URL = 'http://localhost:8080/video-classification'; // Service URL
const videoPath = './demo.mp4';
const outputVideoPath = './out.mp4';

// Base64 encode the local video
function encodeVideoToBase64(filePath) {
  const bitmap = fs.readFileSync(filePath);
  return Buffer.from(bitmap).toString('base64');
}

const config = {
  method: 'POST',
  maxBodyLength: Infinity,
  url: API_URL,
  data: JSON.stringify({
    'video': encodeVideoToBase64(videoPath) // Base64 encoded file content or video URL
  })
};

// Call the API
axios.request(config)
  .then((response) => {
    // Process the API response
    const result = response.data['result'];
    const videoBuffer = Buffer.from(result['video'], 'base64');
    fs.writeFile(outputVideoPath, videoBuffer, (err) => {
      if (err) throw err;
      console.log(`Output video saved at ${outputVideoPath}`);
    });
    console.log('\nCategories:');
    console.log(result['categories']);
  })
  .catch((error) => {
    console.log(error);
  });
```

**PHP**

```php
<?php

$API_URL = "http://localhost:8080/video-classification"; // Service URL
$video_path = "./demo.mp4";
$output_video_path = "./out.mp4";

// Base64 encode the local video
$video_data = base64_encode(file_get_contents($video_path));
$payload = array("video" => $video_data); // Base64 encoded file content or video URL

// Call the API
$ch = curl_init($API_URL);
curl_setopt($ch, CURLOPT_POST, true);
curl_setopt($ch, CURLOPT_POSTFIELDS, json_encode($payload));
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
$response = curl_exec($ch);
curl_close($ch);

// Process the API response
$result = json_decode($response, true)["result"];
file_put_contents($output_video_path, base64_decode($result["video"]));
echo "Output video saved at " . $output_video_path . "\n";
echo "\nCategories:\n";
print_r($result["categories"]);
?>
```


📱 Edge Deployment: Edge deployment is a method of placing computing and data processing capabilities directly on the user's device, allowing it to process data locally without relying on remote servers. PaddleX supports deploying models on edge devices such as Android. For detailed procedures on edge deployment, please refer to the PaddleX Edge Deployment Guide. You can choose the appropriate method to deploy the pipeline according to your needs and proceed with subsequent AI application integration.

## 4. Secondary Development

If the default model weights provided by the general video classification pipeline are not satisfactory in terms of accuracy or speed for your specific scenario, you can attempt to fine-tune the existing model using your own domain-specific or application-specific data to improve the recognition performance of the pipeline in your scenario.

### 4.1 Model Fine-Tuning

Since the general video classification pipeline includes only a video classification module, if the pipeline's performance is not up to expectations, you can analyze the videos with poor recognition and refer to the corresponding fine-tuning tutorial link in the table below for model fine-tuning.

| Scenario | Fine-Tuning Module | Fine-Tuning Reference Link |
|----------|--------------------|----------------------------|
| Inaccurate video classification | Video Classification Module | Link |

### 4.2 Model Application

After completing fine-tuning with your private dataset, you will obtain the local model weight file.

To use the fine-tuned model weights, simply modify the pipeline configuration file by filling in the path to the fine-tuned model weights at the corresponding location:

```yaml
...
SubModules:
  VideoClassification:
    module_name: video_classification
    model_name: PP-TSMv2-LCNetV2_8frames_uniform
    model_dir: null # Replace with the path to the fine-tuned video classification model weights
    batch_size: 1
    topk: 1
...
```

Subsequently, refer to the command-line method or Python script method in the local experience section to load the modified pipeline configuration file.

## 5. Multi-Hardware Support

PaddleX supports a variety of mainstream hardware devices, including NVIDIA GPU, Kunlunxin XPU, Ascend NPU, and Cambricon MLU. Simply modify the --device parameter to seamlessly switch between hardware devices.

For example, to use an Ascend NPU for video classification in the pipeline, the command is:

```bash
paddlex --pipeline video_classification \
    --input general_video_classification_001.mp4 \
    --topk 5 \
    --save_path ./output \
    --device npu:0
```

Of course, you can also specify the hardware device when calling create_pipeline() or predict() in a Python script.
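
For instance, a minimal sketch of specifying the device in a Python script (mirroring the CLI command above):

```python
from paddlex import create_pipeline

# Initialize the pipeline on the first Ascend NPU.
pipeline = create_pipeline(pipeline="video_classification", device="npu:0")
output = pipeline.predict("general_video_classification_001.mp4", topk=5)
```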

If you want to use the general video classification pipeline on a wider variety of hardware, please refer to the PaddleX Multi-Device Usage Guide.