---
comments: true
---

# Video Classification Module Development Tutorial

## I. Overview

The Video Classification Module is a crucial component in a computer vision system, responsible for categorizing input videos. The performance of this module directly impacts the accuracy and efficiency of the entire computer vision system. The Video Classification Module typically receives videos as input and then, through deep learning or other machine learning algorithms, classifies them into predefined categories based on their characteristics and content. For example, in an action recognition system, the Video Classification Module may need to classify input videos into categories such as "Abseiling," "Air Drumming," "Answering Questions," etc. The classification results of the Video Classification Module are output for use by other modules or systems.

## II. List of Supported Models
| Model | Model Download Link | Top1 Acc(%) | Model Storage Size (M) | Description |
|---|---|---|---|---|
| PPTSM_ResNet50_k400_8frames_uniform | Inference Model/Trained Model | 74.36 | 93.4 M | PP-TSM is a video classification model developed by Baidu PaddlePaddle's Vision Team. This model is optimized based on the ResNet-50 backbone network and undergoes model tuning in six aspects: data augmentation, network structure fine-tuning, training strategies, Batch Normalization (BN) layer optimization, pre-trained model selection, and model distillation. Under the center crop evaluation method, its accuracy on Kinetics-400 is improved by 3.95 points compared to the original paper's implementation. |
| PPTSMv2_LCNet_k400_8frames_uniform | Inference Model/Trained Model | 71.71 | 22.5 M | PP-TSMv2 is a lightweight video classification model optimized based on the CPU-oriented model PP-LCNetV2. It undergoes model tuning in seven aspects: backbone network and pre-trained model selection, data augmentation, TSM module tuning, input frame number optimization, decoding speed optimization, DML distillation, and LTA module. Under the center crop evaluation method, it achieves an accuracy of 75.16%, with an inference speed of only 456ms on the CPU for a 10-second video input. |
| PPTSMv2_LCNet_k400_16frames_uniform | Inference Model/Trained Model | 73.11 | 22.5 M | The 16-frame variant of the PP-TSMv2 model described above. |
Note: The above accuracy metrics refer to Top-1 Accuracy on the K400 validation set. All model GPU inference times are based on NVIDIA Tesla T4 machines, with precision type FP32. CPU inference speeds are based on Intel® Xeon® Gold 5117 CPU @ 2.00GHz, with 8 threads and precision type FP32.
## III. Quick Integration

> ❗ Before quick integration, please install the PaddleX wheel package. For detailed instructions, refer to the [PaddleX Local Installation Guide](../../../installation/installation.en.md).

After installing the wheel package, you can complete video classification module inference with just a few lines of code. You can switch freely between the models in this module, and you can also integrate the model inference of the video classification module into your project. Before running the following code, please download the [demo video](https://paddle-model-ecology.bj.bcebos.com/paddlex/videos/demo_video/general_video_classification_001.mp4) to your local machine.

```python
from paddlex import create_model
model = create_model("PPTSMv2_LCNet_k400_8frames_uniform")
output = model.predict("general_video_classification_001.mp4", batch_size=1)
for res in output:
    res.print(json_format=False)
    res.save_to_video("./output/")
    res.save_to_json("./output/res.json")
```

For more information on using PaddleX's single-model inference APIs, please refer to the [PaddleX Single-Model Python Script Usage Instructions](../../instructions/model_python_API.en.md).

## IV. Custom Development

If you are seeking higher accuracy from existing models, you can use PaddleX's custom development capabilities to develop better video classification models. Before using PaddleX to develop video classification models, please ensure that you have installed the relevant model training plugins for video classification in PaddleX. The installation process can be found in the custom development section of the [PaddleX Local Installation Guide](../../../installation/installation.en.md).

### 4.1 Data Preparation

Before model training, you need to prepare the dataset for the corresponding task module. PaddleX provides data validation functionality for each module, and only data that passes validation can be used for model training. Additionally, PaddleX provides demo datasets for each module, which you can use to complete subsequent development. If you wish to use your own private dataset for subsequent model training, please refer to the [PaddleX Video Classification Task Module Data Annotation Guide](../../../data_annotations/video_modules/video_classification.en.md).

#### 4.1.1 Demo Data Download

You can use the following command to download the demo dataset to a specified folder:

```bash
cd /path/to/paddlex
wget https://paddle-model-ecology.bj.bcebos.com/paddlex/data/k400_examples.tar -P ./dataset
tar -xf ./dataset/k400_examples.tar -C ./dataset/
```

#### 4.1.2 Data Validation

One command is all you need to complete data validation:

```bash
python main.py -c paddlex/configs/video_classification/PPTSMv2_LCNet_k400_8frames_uniform.yaml \
    -o Global.mode=check_dataset \
    -o Global.dataset_dir=./dataset/k400_examples
```

After executing the above command, PaddleX will validate the dataset and summarize its basic information. If the command runs successfully, it will print `Check dataset passed !` in the log. The validation results file is saved in `./output/check_dataset_result.json`, and related outputs are saved in the `./output/check_dataset` directory in the current directory, including visualized sample videos and a sample distribution histogram.

```json
{
  "done_flag": true,
  "check_pass": true,
  "attributes": {
    "label_file": "../../dataset/k400_examples/label.txt",
    "num_classes": 5,
    "train_samples": 250,
    "train_sample_paths": [
      "check_dataset/../../dataset/k400_examples/videos/Wary2ON3aSo_000079_000089.mp4",
      "check_dataset/../../dataset/k400_examples/videos/_LHpfh0rXjk_000012_000022.mp4",
      "check_dataset/../../dataset/k400_examples/videos/dyoiNbn80q0_000039_000049.mp4",
      "check_dataset/../../dataset/k400_examples/videos/brBw6cFwock_000049_000059.mp4",
      "check_dataset/../../dataset/k400_examples/videos/-o4X5Z_Isyc_000085_000095.mp4",
      "check_dataset/../../dataset/k400_examples/videos/e24p-4W3TiU_000011_000021.mp4",
      "check_dataset/../../dataset/k400_examples/videos/2Grg_zwmYZE_000004_000014.mp4",
      "check_dataset/../../dataset/k400_examples/videos/aZY_0UqRNgA_000098_000108.mp4",
      "check_dataset/../../dataset/k400_examples/videos/WZlsi4nQHOo_000025_000035.mp4",
      "check_dataset/../../dataset/k400_examples/videos/rRh-lkFj4Tw_000001_000011.mp4"
    ],
    "val_samples": 50,
    "val_sample_paths": [
      "check_dataset/../../dataset/k400_examples/videos/7Mga5kywfU4.mp4",
      "check_dataset/../../dataset/k400_examples/videos/w5UCdQ2NmfY.mp4",
      "check_dataset/../../dataset/k400_examples/videos/Qbo_tnzfjOY.mp4",
      "check_dataset/../../dataset/k400_examples/videos/LgW8pMDtylE.mkv",
      "check_dataset/../../dataset/k400_examples/videos/BY0883Dvt1c.mp4",
      "check_dataset/../../dataset/k400_examples/videos/PHQkMPu-KNo.mp4",
      "check_dataset/../../dataset/k400_examples/videos/7LSJ2Ryv1a8.mp4",
      "check_dataset/../../dataset/k400_examples/videos/oBYZWvlI8Uk.mp4",
      "check_dataset/../../dataset/k400_examples/videos/dpn2eg9O3Rs.mkv",
      "check_dataset/../../dataset/k400_examples/videos/hXtsZAaZ3yc.mkv"
    ]
  },
  "analysis": {
    "histogram": "check_dataset/histogram.png"
  },
  "dataset_path": "./dataset/k400_examples",
  "show_type": "video",
  "dataset_type": "VideoClsDataset"
}
```
The above validation results, with `check_pass` being `True`, indicate that the dataset format meets the requirements. Explanations for other indicators are as follows:

* `attributes.num_classes`: The number of classes in this dataset is 5;
* `attributes.train_samples`: The number of training samples in this dataset is 250;
* `attributes.val_samples`: The number of validation samples in this dataset is 50;
* `attributes.train_sample_paths`: A list of relative paths to the visual samples in the training set of this dataset;
* `attributes.val_sample_paths`: A list of relative paths to the visual samples in the validation set of this dataset;

Additionally, the dataset validation analyzes the sample number distribution across all classes in the dataset and generates a distribution histogram (`histogram.png`):

#### 4.1.3 Dataset Format Conversion / Dataset Splitting (Optional)

**(1) Dataset Format Conversion**

Video classification does not currently support dataset format conversion.
**(2) Dataset Splitting**

The parameters for dataset splitting can be set by modifying the fields under `CheckDataset` in the configuration file. Example explanations for some of the parameters in the configuration file are as follows:

* `CheckDataset`:
  * `split`:
    * `enable`: Whether to re-split the dataset. When set to `True`, the dataset will be re-split. The default is `False`;
    * `train_percent`: If re-splitting the dataset, you need to set the percentage of the training set, which should be an integer between 0 and 100, ensuring that the sum with `val_percent` equals 100;

For example, if you want to re-split the dataset with a 90% training set and a 10% validation set, you need to modify the configuration file as follows:
```yaml
......
CheckDataset:
  ......
  split:
    enable: True
    train_percent: 90
    val_percent: 10
  ......
```
Then execute the command:

```bash
python main.py -c paddlex/configs/video_classification/PPTSMv2_LCNet_k400_8frames_uniform.yaml \
    -o Global.mode=check_dataset \
    -o Global.dataset_dir=./dataset/k400_examples
```
After data splitting is executed, the original annotation files will be renamed to `xxx.bak` in the original path.

These parameters can also be set by appending command-line arguments:
```bash
python main.py -c paddlex/configs/video_classification/PPTSMv2_LCNet_k400_8frames_uniform.yaml \
    -o Global.mode=check_dataset \
    -o Global.dataset_dir=./dataset/k400_examples \
    -o CheckDataset.split.enable=True \
    -o CheckDataset.split.train_percent=90 \
    -o CheckDataset.split.val_percent=10
```
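As noted above, re-splitting renames the original annotation files to `xxx.bak`. If you need to undo a split, a minimal Python sketch along the following lines can restore them; the dataset directory is an assumption, so adjust the path to your own layout:

```python
from pathlib import Path

# Illustrative helper (not part of PaddleX): restore annotation files
# that were renamed to *.bak when the dataset was re-split.
dataset_dir = Path("./dataset/k400_examples")

for bak in dataset_dir.glob("*.bak"):
    original = bak.with_suffix("")  # strip the trailing ".bak"
    bak.replace(original)           # overwrite the re-split file
    print(f"restored {original.name}")
```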
### 4.2 Model Training

A single command can complete model training, following the same pattern as the validation command above with `Global.mode` set to `train`:

```bash
python main.py -c paddlex/configs/video_classification/PPTSMv2_LCNet_k400_8frames_uniform.yaml \
    -o Global.mode=train \
    -o Global.dataset_dir=./dataset/k400_examples
```

By default, model outputs are saved to the directory `output`. If you need to specify a save path, you can set it through the `-o Global.output` field in the configuration file.

After completing the model training, all outputs are saved in the specified output directory (default is `./output/`), typically including:

* `train_result.json`: Training result record file, recording whether the training task completed normally, as well as the output weight metrics, related file paths, etc.;
* `train.log`: Training log file, recording changes in model metrics and loss during training;
* `config.yaml`: Training configuration file, recording the hyperparameter configuration for this training session;
* `.pdparams`, `.pdema`, `.pdopt`, `.pdstates`, `.pdiparams`, `.pdmodel`: Model weight-related files, including network parameters, optimizer state, EMA weights, static graph network parameters, static graph network structure, etc.
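If you want to pick up training artifacts from a script, `train_result.json` is plain JSON. Its exact schema is not documented here, so the sketch below simply prints whatever top-level fields are present rather than assuming specific keys:

```python
import json

# Load the training record; the exact schema may vary across
# PaddleX versions, so we only inspect what is actually present.
with open("./output/train_result.json", "r", encoding="utf-8") as f:
    train_result = json.load(f)

for key, value in train_result.items():
    print(f"{key}: {value}")
```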
### 4.3 Model Evaluation

After training, you can evaluate the model weights on the validation set, again following the command pattern above with `Global.mode` set to `evaluate`:

```bash
python main.py -c paddlex/configs/video_classification/PPTSMv2_LCNet_k400_8frames_uniform.yaml \
    -o Global.mode=evaluate \
    -o Global.dataset_dir=./dataset/k400_examples
```

When evaluating the model, you need to specify the model weight file path. Each configuration file has a default weight save path built in. If you need to change it, simply set it by appending a command-line parameter, such as `-o Evaluate.weight_path=./output/best_model/best_model.pdparams`.

After completing the model evaluation, an `evaluate_result.json` file will be generated, which records the evaluation results. Specifically, it records whether the evaluation task was completed successfully and the model's evaluation metrics, including `val.top1` and `val.top5`.
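As with training, the evaluation report can be read back in a few lines of Python. This is a minimal sketch assuming the metric keys `val.top1` and `val.top5` named above appear at the top level of the file; if your version nests them differently, dump the full report instead:

```python
import json

# Load the evaluation report written by `Global.mode=evaluate`.
with open("./output/evaluate_result.json", "r", encoding="utf-8") as f:
    eval_result = json.load(f)

# Print the Top-1/Top-5 metrics mentioned above, if present.
for key in ("val.top1", "val.top5"):
    print(f"{key}: {eval_result.get(key, 'not found at top level')}")
```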