| Model | Top-1 Accuracy (%) | GPU Inference Time (ms) | CPU Inference Time (ms) | Model Size | Description |
|---|---|---|---|---|---|
| CLIP_vit_base_patch16_224 | 85.36 | 13.1957 | 285.493 | 306.5 M | CLIP is an image classification model built on the correlation between vision and language. It adopts contrastive learning and large-scale pre-training to achieve unsupervised or weakly supervised image classification, and is especially suitable for large-scale datasets. By mapping images and texts into the same representation space, the model learns general features, exhibiting good generalization ability and interpretability, and it transfers well to many downstream tasks. A minimal sketch of its image-text matching step is given after the table. |
| CLIP_vit_large_patch14_224 | 88.1 | 51.1284 | 1131.28 | 1.04 G | |
| ConvNeXt_base_224 | 83.84 | 12.8473 | 1513.87 | 313.9 M | The ConvNeXt series, proposed by Meta in 2022, is based on a pure CNN architecture. Starting from ResNet, it incorporates the advantages of Swin Transformer, including its training strategies and network-structure optimizations, to explore the performance limits of convolutional neural networks. The series retains many strengths of CNNs, including high inference efficiency and easy transfer to downstream tasks. A sketch of the ConvNeXt block appears after the table. |
| ConvNeXt_base_384 | 84.90 | 31.7607 | 3967.05 | 313.9 M | |
| ConvNeXt_large_224 | 84.26 | 26.8103 | 2463.56 | 700.7 M | |
| ConvNeXt_large_384 | 85.27 | 66.4058 | 6598.92 | 700.7 M | |
| ConvNeXt_small | 83.13 | 9.74075 | 1127.6 | 178.0 M | |
| ConvNeXt_tiny | 82.03 | 5.48923 | 672.559 | 104.1 M | |
| FasterNet-L | 83.5 | 23.4415 | - | 357.1 M | FasterNet is a neural network designed to improve runtime speed. Its key contributions are: 1. it re-examines popular operators and finds that their low FLOPS mainly stems from frequent memory accesses, especially in depthwise convolutions; 2. it proposes Partial Convolution (PConv), which extracts image features more efficiently by reducing redundant computation and memory access (see the PConv sketch after the table); 3. it introduces the FasterNet series built on PConv, a new design scheme that achieves significantly higher runtime speed on various devices without compromising task performance. |
| FasterNet-M | 83.0 | 21.8936 | - | 204.6 M | |
| FasterNet-S | 81.3 | 13.0409 | - | 119.3 M | |
| FasterNet-T0 | 71.9 | 12.2432 | - | 15.1 M | |
| FasterNet-T1 | 75.9 | 11.3562 | - | 29.2 M | |
| FasterNet-T2 | 79.1 | 10.703 | - | 57.4 M | |
| MobileNetV1_x0_5 | 63.5 | 1.86754 | 7.48297 | 4.8 M | MobileNetV1 is a network released by Google in 2017 for mobile and embedded devices. It decomposes the traditional convolution into a depthwise separable convolution, i.e., a depthwise convolution followed by a pointwise convolution (see the sketch after the table). Compared with a standard convolutional network, this combination significantly reduces the parameter count and computation. The network can be used for image classification and other vision tasks. |
| MobileNetV1_x0_25 | 51.4 | 1.83478 | 4.83674 | 1.8 M | |
| MobileNetV1_x0_75 | 68.8 | 2.57903 | 10.6343 | 9.3 M | |
| MobileNetV1_x1_0 | 71.0 | 2.78781 | 13.98 | 15.2 M | |
| MobileNetV2_x0_5 | 65.0 | 4.94234 | 11.1629 | 7.1 M | MobileNetV2 is a lightweight network proposed by Google as the successor to MobileNetV1. It introduces linear bottlenecks and inverted residual blocks as its basic building units, and the full network is formed by stacking these modules (see the sketch after the table). As a result, it achieves higher classification accuracy with only half the FLOPs of MobileNetV1. |
| MobileNetV2_x0_25 | 53.2 | 4.50856 | 9.40991 | 5.5 M | |
| MobileNetV2_x1_0 | 72.2 | 6.12159 | 16.0442 | 12.6 M | |
| MobileNetV2_x1_5 | 74.1 | 6.28385 | 22.5129 | 25.0 M | |
| MobileNetV2_x2_0 | 75.2 | 6.12888 | 30.8612 | 41.2 M | |
| MobileNetV3_large_x0_5 | 69.2 | 6.31302 | 14.5588 | 9.6 M | MobileNetV3 is a NAS-based lightweight network proposed by Google in 2019. To further improve performance, it replaces the ReLU and sigmoid activations with hard_swish and hard_sigmoid, respectively (defined in the sketch after the table), and introduces several other strategies aimed specifically at reducing network computation. |
| MobileNetV3_large_x0_35 | 64.3 | 5.76207 | 13.9041 | 7.5 M | |
| MobileNetV3_large_x0_75 | 73.1 | 8.41737 | 16.9506 | 14.0 M | |
| MobileNetV3_large_x1_0 | 75.3 | 8.64112 | 19.1614 | 19.5 M | |
| MobileNetV3_large_x1_25 | 76.4 | 8.73358 | 22.1296 | 26.5 M | |
| MobileNetV3_small_x0_5 | 59.2 | 5.16721 | 11.2688 | 6.8 M | |
| MobileNetV3_small_x0_35 | 53.0 | 5.22053 | 11.0055 | 6.0 M | |
| MobileNetV3_small_x0_75 | 66.0 | 5.39831 | 12.8313 | 8.5 M | |
| MobileNetV3_small_x1_0 | 68.2 | 6.00993 | 12.9598 | 10.5 M | |
| MobileNetV3_small_x1_25 | 70.7 | 6.9589 | 14.3995 | 13.0 M | |
| MobileNetV4_conv_large | 83.4 | 12.5485 | 51.6453 | 125.2 M | MobileNetV4 is an efficient architecture designed specifically for mobile devices. Its core is the UIB (Universal Inverted Bottleneck) module, a unified and flexible structure that subsumes the IB (Inverted Bottleneck), ConvNeXt, FFN (Feed Forward Network), and the new ExtraDW (Extra Depthwise) variants (see the sketch after the table). Alongside UIB, it introduces Mobile MQA, an attention block customized for mobile accelerators that delivers up to a 39% speedup. MobileNetV4 also employs a novel Neural Architecture Search (NAS) scheme to improve the effectiveness of the search process. |
| MobileNetV4_conv_medium | 79.9 | 9.65509 | 26.6157 | 37.6 M | |
| MobileNetV4_conv_small | 74.6 | 5.24172 | 11.0893 | 14.7 M | |
| MobileNetV4_hybrid_large | 83.8 | 20.0726 | 213.769 | 145.1 M | |
| MobileNetV4_hybrid_medium | 80.5 | 19.7543 | 62.2624 | 42.9 M | |
| PP-HGNet_base | 85.0 | 14.2969 | 327.114 | 249.4 M | PP-HGNet (High Performance GPU Net) is a high-performance backbone developed by Baidu PaddlePaddle's vision team, tailored for GPU platforms. It builds on VOVNet with learnable downsampling layers (LDS Layer), incorporating the advantages of models such as ResNet_vd and PP-HGNet. On GPU platforms, it achieves higher accuracy than other SOTA models at the same speed: it outperforms ResNet34-D by 3.8 percentage points and ResNet50-D by 2.4 percentage points, and under the same SSLD distillation conditions it ultimately surpasses ResNet50-D by 4.7 percentage points. At the same level of accuracy, its inference speed also far exceeds that of mainstream Vision Transformers. |
| PP-HGNet_small | 81.51 | 5.50661 | 119.041 | 86.5 M | |
| PP-HGNet_tiny | 79.83 | 5.22006 | 69.396 | 52.4 M | |
| PP-HGNetV2-B0 | 77.77 | 6.53694 | 23.352 | 21.4 M | PP-HGNetV2 (High Performance GPU Network V2) is the next generation of Baidu PaddlePaddle's PP-HGNet, with further optimizations and improvements over its predecessor. On NVIDIA GPUs it pushes the limits of the accuracy-latency balance, significantly outperforming other models with similar inference speed in accuracy, and it performs strongly across a variety of classification and evaluation scenarios. |
| PP-HGNetV2-B1 | 79.18 | 6.56034 | 27.3099 | 22.6 M | |
| PP-HGNetV2-B2 | 81.74 | 9.60494 | 43.1219 | 39.9 M | |
| PP-HGNetV2-B3 | 82.98 | 11.0042 | 55.1367 | 57.9 M | |
| PP-HGNetV2-B4 | 83.57 | 9.66407 | 54.2462 | 70.4 M | |
| PP-HGNetV2-B5 | 84.75 | 15.7091 | 115.926 | 140.8 M | |
| PP-HGNetV2-B6 | 86.30 | 21.226 | 255.279 | 268.4 M | |
| PP-LCNet_x0_5 | 63.14 | 3.67722 | 6.66857 | 6.7 M | PP-LCNet is a lightweight backbone network developed by Baidu PaddlePaddle's vision team. It enhances model performance without increasing inference time, significantly surpassing other lightweight SOTA models. |
| PP-LCNet_x0_25 | 51.86 | 2.65341 | 5.81357 | 5.5 M | |
| PP-LCNet_x0_35 | 58.09 | 2.7212 | 6.28944 | 5.9 M | |
| PP-LCNet_x0_75 | 68.18 | 3.91032 | 8.06953 | 8.4 M | |
| PP-LCNet_x1_0 | 71.32 | 3.84845 | 9.23735 | 10.5 M | |
| PP-LCNet_x1_5 | 73.71 | 3.97666 | 12.3457 | 16.0 M | |
| PP-LCNet_x2_0 | 75.18 | 4.07556 | 16.2752 | 23.2 M | |
| PP-LCNet_x2_5 | 76.60 | 4.06028 | 21.5063 | 32.1 M | |
| PP-LCNetV2_base | 77.05 | 5.23428 | 19.6005 | 23.7 M | PP-LCNetV2 is the next generation of PP-LCNet, self-developed by Baidu PaddlePaddle's vision team. Building on PP-LCNet, it mainly uses re-parameterization to combine depthwise convolutions of different kernel sizes, and further optimizes the pointwise convolutions, shortcuts, and so on (a re-parameterization sketch is given after the table). Without additional data, the PP-LCNetV2_base model achieves over 77% Top-1 accuracy on ImageNet while keeping inference time below 4.4 ms on Intel CPU platforms. |
| PP-LCNetV2_large | 78.51 | 6.78335 | 30.4378 | 37.3 M | |
| PP-LCNetV2_small | 73.97 | 3.89762 | 13.0273 | 14.6 M | |
| ResNet18_vd | 72.3 | 3.53048 | 31.3014 | 41.5 M | The ResNet series was introduced in 2015, winning the ILSVRC 2015 competition with a top-5 error rate of 3.57%. The network innovatively proposed the residual structure; stacking residual blocks yields the ResNet architecture (see the sketch after the table). Experiments show that residual blocks effectively improve both convergence speed and accuracy. |
| ResNet18 | 71.0 | 2.4868 | 27.4601 | 41.5 M | |
| ResNet34_vd | 76.0 | 5.60675 | 56.0653 | 77.3 M | |
| ResNet34 | 74.6 | 4.16902 | 51.925 | 77.3 M | |
| ResNet50_vd | 79.1 | 10.1885 | 68.446 | 90.8 M | |
| ResNet50 | 76.5 | 9.62383 | 64.8135 | 90.8 M | |
| ResNet101_vd | 80.2 | 20.0563 | 124.85 | 158.4 M | |
| ResNet101 | 77.6 | 19.2297 | 121.006 | 158.4 M | |
| ResNet152_vd | 80.6 | 29.6439 | 181.678 | 214.3 M | |
| ResNet152 | 78.3 | 30.0461 | 177.707 | 214.2 M | |
| ResNet200_vd | 80.9 | 39.1628 | 235.185 | 266.0 M | |
| StarNet-S1 | 73.6 | 9.895 | 23.0465 | 11.2 M | StarNet explores the untapped potential of the "star operation" (element-wise multiplication) in network design. It shows that the star operation maps inputs to a high-dimensional, nonlinear feature space, much like a kernel trick but without enlarging the network (see the sketch after the table). StarNet itself is a simple yet powerful prototype network that demonstrates excellent performance and low latency under compact structures and limited computational budgets. |
| StarNet-S2 | 74.8 | 7.91279 | 21.9571 | 14.3 M | |
| StarNet-S3 | 77.0 | 10.7531 | 30.7656 | 22.2 M | |
| StarNet-S4 | 79.0 | 15.2868 | 43.2497 | 28.9 M | |
| SwinTransformer_base_patch4_window7_224 | 83.37 | 16.9848 | 383.83 | 310.5 M | Swin Transformer is a vision Transformer that can serve as a general-purpose backbone for computer vision tasks. It is a hierarchical Transformer whose representation is computed with shifted windows: self-attention is restricted to non-overlapping local windows, while window shifting allows cross-window connections, improving both efficiency and performance (see the window-partition sketch after the table). |
| SwinTransformer_base_patch4_window12_384 | 84.17 | 37.2855 | 1178.63 | 311.4 M | |
| SwinTransformer_large_patch4_window7_224 | 86.19 | 27.5498 | 689.729 | 694.8 M | |
| SwinTransformer_large_patch4_window12_384 | 87.06 | 74.1768 | 2105.22 | 696.1 M | |
| SwinTransformer_small_patch4_window7_224 | 83.21 | 16.3982 | 285.56 | 175.6 M | |
| SwinTransformer_tiny_patch4_window7_224 | 81.10 | 8.54846 | 156.306 | 100.1 M | |
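The CLIP row above describes mapping images and texts into a shared representation space. The following is a minimal, illustrative PyTorch sketch of that matching step only, using random tensors in place of real encoder outputs; the embedding width, temperature value, and batch sizes are arbitrary assumptions, not CLIP's actual configuration.

```python
import torch
import torch.nn.functional as F

# Toy embeddings standing in for the outputs of CLIP's image and text encoders.
image_features = torch.randn(4, 512)   # 4 images, 512-d embeddings (illustrative)
text_features = torch.randn(3, 512)    # 3 candidate class prompts

# Map both modalities onto the unit sphere of the shared representation space.
image_features = F.normalize(image_features, dim=-1)
text_features = F.normalize(text_features, dim=-1)

# Cosine similarity scaled by a temperature gives classification logits.
logit_scale = torch.tensor(100.0)      # CLIP learns this scale; 100.0 is illustrative
logits_per_image = logit_scale * image_features @ text_features.t()

# Each image is classified by the prompt with the highest similarity.
probs = logits_per_image.softmax(dim=-1)
print(probs.argmax(dim=-1))            # predicted prompt index per image
```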
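For ConvNeXt, the block below sketches the basic unit (depthwise 7x7 convolution, channel MLP, residual connection) in PyTorch. It is a simplified illustration: layer scale and stochastic depth from the paper are omitted, and the channel width 96 is just an example.

```python
import torch
import torch.nn as nn

class ConvNeXtBlock(nn.Module):
    """Simplified ConvNeXt block: 7x7 depthwise conv, LayerNorm,
    pointwise expand/contract MLP, and a residual connection."""
    def __init__(self, dim: int):
        super().__init__()
        self.dwconv = nn.Conv2d(dim, dim, kernel_size=7, padding=3, groups=dim)
        self.norm = nn.LayerNorm(dim)            # normalizes over channels
        self.pwconv1 = nn.Linear(dim, 4 * dim)   # 1x1 conv expressed as Linear on NHWC
        self.act = nn.GELU()
        self.pwconv2 = nn.Linear(4 * dim, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        residual = x
        x = self.dwconv(x)
        x = x.permute(0, 2, 3, 1)                # NCHW -> NHWC for LayerNorm/Linear
        x = self.pwconv2(self.act(self.pwconv1(self.norm(x))))
        x = x.permute(0, 3, 1, 2)                # back to NCHW
        return residual + x

x = torch.randn(1, 96, 56, 56)
print(ConvNeXtBlock(96)(x).shape)  # torch.Size([1, 96, 56, 56])
```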
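FasterNet's Partial Convolution (PConv) convolves only a fraction of the channels and passes the rest through unchanged, cutting both FLOPs and memory accesses. The sketch below assumes the commonly cited split ratio of 1/4; the class name and defaults are illustrative, not FasterNet's actual code.

```python
import torch
import torch.nn as nn

class PartialConv(nn.Module):
    """Sketch of PConv: a regular 3x3 conv over only dim // n_div channels,
    with the remaining channels passed through untouched."""
    def __init__(self, dim: int, n_div: int = 4):
        super().__init__()
        self.dim_conv = dim // n_div             # channels actually convolved
        self.dim_keep = dim - self.dim_conv      # channels left unchanged
        self.conv = nn.Conv2d(self.dim_conv, self.dim_conv,
                              kernel_size=3, padding=1, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x1, x2 = torch.split(x, [self.dim_conv, self.dim_keep], dim=1)
        return torch.cat([self.conv(x1), x2], dim=1)

x = torch.randn(1, 64, 32, 32)
print(PartialConv(64)(x).shape)  # torch.Size([1, 64, 32, 32])
```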
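MobileNetV1's depthwise separable convolution factorizes a standard convolution into a depthwise step (one filter per channel) and a pointwise 1x1 step that mixes channels. A minimal PyTorch sketch of the unit, not PaddleX's actual implementation:

```python
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """MobileNetV1's basic unit: depthwise 3x3 conv followed by a pointwise
    1x1 conv, approximating a standard conv at a fraction of the cost."""
    def __init__(self, in_ch: int, out_ch: int, stride: int = 1):
        super().__init__()
        self.depthwise = nn.Sequential(
            nn.Conv2d(in_ch, in_ch, 3, stride=stride, padding=1,
                      groups=in_ch, bias=False),   # groups=in_ch -> depthwise
            nn.BatchNorm2d(in_ch),
            nn.ReLU(inplace=True),
        )
        self.pointwise = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 1, bias=False),  # 1x1 channel mixing
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.pointwise(self.depthwise(x))

x = torch.randn(1, 32, 112, 112)
print(DepthwiseSeparableConv(32, 64)(x).shape)  # torch.Size([1, 64, 112, 112])
```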
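MobileNetV2's inverted residual block expands channels, filters them depthwise, then projects back through a linear (activation-free) bottleneck. A minimal sketch, assuming the paper's default expansion ratio of 6:

```python
import torch
import torch.nn as nn

class InvertedResidual(nn.Module):
    """MobileNetV2 block: 1x1 expansion, depthwise 3x3, then a 1x1 linear
    bottleneck, with a shortcut when input and output shapes match."""
    def __init__(self, in_ch: int, out_ch: int, stride: int = 1, expand_ratio: int = 6):
        super().__init__()
        hidden = in_ch * expand_ratio
        self.use_shortcut = stride == 1 and in_ch == out_ch
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, hidden, 1, bias=False),        # expand
            nn.BatchNorm2d(hidden), nn.ReLU6(inplace=True),
            nn.Conv2d(hidden, hidden, 3, stride=stride, padding=1,
                      groups=hidden, bias=False),           # depthwise
            nn.BatchNorm2d(hidden), nn.ReLU6(inplace=True),
            nn.Conv2d(hidden, out_ch, 1, bias=False),       # linear bottleneck
            nn.BatchNorm2d(out_ch),                         # no activation here
        )

    def forward(self, x):
        out = self.block(x)
        return x + out if self.use_shortcut else out

x = torch.randn(1, 32, 28, 28)
print(InvertedResidual(32, 32)(x).shape)  # torch.Size([1, 32, 28, 28])
```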
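The hard_swish and hard_sigmoid activations used by MobileNetV3 are piecewise-linear approximations of swish and sigmoid that avoid the costly exponential function. Their standard definitions:

```python
import torch
import torch.nn.functional as F

def hard_sigmoid(x: torch.Tensor) -> torch.Tensor:
    # Piecewise-linear approximation of sigmoid: clamp((x + 3) / 6, 0, 1)
    return F.relu6(x + 3.0) / 6.0

def hard_swish(x: torch.Tensor) -> torch.Tensor:
    # x * hard_sigmoid(x): mimics swish without evaluating exp()
    return x * hard_sigmoid(x)

x = torch.linspace(-4, 4, 9)
print(hard_sigmoid(x))
print(hard_swish(x))
```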
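MobileNetV4's UIB unifies several block types by making two depthwise convolutions optional around the usual expand/project structure; toggling them recovers IB-, ConvNeXt-, ExtraDW-, and FFN-like variants. The sketch below is a loose illustration of that idea only; the real UIB's normalization, activations, and search-chosen configurations are not reproduced here.

```python
import torch
import torch.nn as nn

class UIB(nn.Module):
    """Hedged sketch of the Universal Inverted Bottleneck: an inverted
    bottleneck with optional depthwise convs before and after expansion.
    Both enabled here corresponds to the ExtraDW-style variant."""
    def __init__(self, dim: int, expand: int = 4,
                 start_dw: bool = True, mid_dw: bool = True):
        super().__init__()
        hidden = dim * expand
        self.start_dw = (nn.Conv2d(dim, dim, 3, padding=1, groups=dim)
                         if start_dw else nn.Identity())
        self.expand = nn.Conv2d(dim, hidden, 1)      # 1x1 expansion
        self.mid_dw = (nn.Conv2d(hidden, hidden, 3, padding=1, groups=hidden)
                       if mid_dw else nn.Identity())
        self.act = nn.ReLU(inplace=True)
        self.project = nn.Conv2d(hidden, dim, 1)     # 1x1 projection

    def forward(self, x):
        out = self.act(self.expand(self.start_dw(x)))
        out = self.project(self.act(self.mid_dw(out)))
        return x + out                                # residual connection

x = torch.randn(1, 64, 14, 14)
print(UIB(64)(x).shape)  # torch.Size([1, 64, 14, 14])
```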
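PP-LCNetV2's re-parameterization merges parallel training-time convolution branches into a single inference-time kernel. The sketch below demonstrates the underlying algebra on a 3x3 + 1x1 pair (batch-norm folding omitted for brevity); it illustrates the general technique rather than PP-LCNetV2's exact branch layout.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Training-time branches: a 3x3 conv and a parallel 1x1 conv.
conv3 = nn.Conv2d(8, 8, 3, padding=1, bias=True)
conv1 = nn.Conv2d(8, 8, 1, bias=True)

# Re-parameterization: pad the 1x1 kernel to 3x3 and sum the branches into
# a single conv, so inference pays for one kernel launch instead of two.
merged = nn.Conv2d(8, 8, 3, padding=1, bias=True)
with torch.no_grad():
    merged.weight.copy_(conv3.weight + F.pad(conv1.weight, [1, 1, 1, 1]))
    merged.bias.copy_(conv3.bias + conv1.bias)

x = torch.randn(1, 8, 16, 16)
branches = conv3(x) + conv1(x)
print(torch.allclose(branches, merged(x), atol=1e-5))  # True
```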
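The residual block at the heart of the ResNet series adds the block's input back to its output, so each block learns a residual correction rather than a full mapping. A minimal basic-block sketch (the downsampling variant with a projection shortcut is omitted):

```python
import torch
import torch.nn as nn

class BasicBlock(nn.Module):
    """ResNet basic block: two 3x3 convs whose output is added back to the
    input through an identity shortcut, easing optimization of deep nets."""
    def __init__(self, channels: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
        )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.relu(x + self.body(x))   # identity shortcut + residual

x = torch.randn(1, 64, 56, 56)
print(BasicBlock(64)(x).shape)  # torch.Size([1, 64, 56, 56])
```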
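StarNet's star operation multiplies two linear projections of the same input element-wise, which implicitly mixes feature pairs in a high-dimensional space. A loose sketch of one such block follows; the projection widths, activation choice, and residual placement are illustrative assumptions rather than StarNet's exact design.

```python
import torch
import torch.nn as nn

class StarBlock(nn.Module):
    """Sketch of a star-operation block: two parallel pointwise projections
    multiplied element-wise, then projected back to the input width."""
    def __init__(self, dim: int, expand: int = 4):
        super().__init__()
        self.f1 = nn.Conv2d(dim, dim * expand, 1)
        self.f2 = nn.Conv2d(dim, dim * expand, 1)
        self.act = nn.ReLU6(inplace=True)
        self.g = nn.Conv2d(dim * expand, dim, 1)

    def forward(self, x):
        return x + self.g(self.act(self.f1(x)) * self.f2(x))  # the "star"

x = torch.randn(1, 32, 28, 28)
print(StarBlock(32)(x).shape)  # torch.Size([1, 32, 28, 28])
```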
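Swin Transformer's windowed attention depends on partitioning the feature map into non-overlapping windows, with a half-window roll between successive layers to connect neighboring windows. A sketch of the partitioning and shift (attention itself and the accompanying masking are omitted):

```python
import torch

def window_partition(x: torch.Tensor, window_size: int) -> torch.Tensor:
    """Split an NHWC feature map into non-overlapping windows so that
    self-attention can be computed inside each window independently."""
    B, H, W, C = x.shape
    x = x.view(B, H // window_size, window_size, W // window_size, window_size, C)
    return x.permute(0, 1, 3, 2, 4, 5).reshape(-1, window_size * window_size, C)

# The "shifted" variant rolls the map by half a window between layers so
# that successive layers connect neighboring windows.
x = torch.randn(1, 56, 56, 96)
shifted = torch.roll(x, shifts=(-3, -3), dims=(1, 2))   # window_size 7 -> shift 3
print(window_partition(shifted, 7).shape)               # torch.Size([64, 49, 96])
```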