---
comments: true
---

# Face Feature Module Usage Tutorial

## I. Overview

Face feature models typically take as input standardized face images that have been processed through detection, extraction, and keypoint correction. These models extract highly discriminative facial features from the images for use by subsequent modules, such as face matching and verification tasks.

## II. Supported Model List

> The inference time includes only the model inference time and excludes pre- and post-processing time.
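Once features are extracted, face matching usually reduces to comparing the two feature vectors, commonly by cosine similarity. A minimal pure-Python sketch (the vectors and the `0.6` threshold below are hypothetical; real embeddings come from the models listed next):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical 4-dim embeddings; the real models output 128 or 512 dims.
feat_a = [0.1, 0.3, -0.2, 0.9]
feat_b = [0.1, 0.28, -0.19, 0.88]

score = cosine_similarity(feat_a, feat_b)
# Two faces are usually declared a match when the score exceeds a tuned threshold.
is_match = score > 0.6
```

The threshold is dataset-dependent and is typically tuned on a validation set of matching/non-matching pairs.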
| Model | Model Download Link | Output Feature Dimension | Acc (%)<br/>AgeDB-30/CFP-FP/LFW | GPU Inference Time (ms)<br/>[Normal Mode / High-Performance Mode] | CPU Inference Time (ms)<br/>[Normal Mode / High-Performance Mode] | Model Storage Size (MB) | Description |
|---|---|---|---|---|---|---|---|
| MobileFaceNet | Inference Model/Training Model | 128 | 96.28/96.71/99.58 | 3.31 / 0.73 | 5.93 / 1.30 | 4.1 | Face feature model trained on the MS1Mv3 dataset with MobileFaceNet |
| ResNet50_face | Inference Model/Training Model | 512 | 98.12/98.56/99.77 | 6.12 / 3.11 | 15.85 / 9.44 | 87.2 | Face feature model trained on the MS1Mv3 dataset with ResNet50 |
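The output feature dimension matters when sizing a face gallery: each enrolled identity stores one embedding. A back-of-the-envelope sketch for float32 embeddings (function name and figures are illustrative, not part of PaddleX):

```python
def gallery_size_bytes(num_faces, feature_dim, bytes_per_value=4):
    """Storage needed for a gallery of float32 embeddings."""
    return num_faces * feature_dim * bytes_per_value

# One million identities: 128-dim (MobileFaceNet) vs 512-dim (ResNet50_face).
mobile = gallery_size_bytes(1_000_000, 128)  # 512_000_000 bytes, ~488 MiB
resnet = gallery_size_bytes(1_000_000, 512)  # 2_048_000_000 bytes, ~1.9 GiB
```

The 512-dim model is more accurate but costs 4x the gallery storage and comparison time.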
| Mode | GPU Configuration | CPU Configuration | Acceleration Technology Combination |
|---|---|---|---|
| Normal Mode | FP32 Precision / No TRT Acceleration | FP32 Precision / 8 Threads | PaddleInference |
| High-Performance Mode | Optimal combination of pre-selected precision types and acceleration strategies | FP32 Precision / 8 Threads | Pre-selected optimal backend (Paddle/OpenVINO/TRT, etc.) |
| Parameter | Parameter Description | Parameter Type | Options | Default Value |
|---|---|---|---|---|
| `model_name` | Name of the model | `str` | None | None |
| `model_dir` | Path to store the model | `str` | None | None |
| `device` | The device used for model inference | `str` | Supports specifying a specific GPU card number, such as "gpu:0", other hardware card numbers, such as "npu:0", or the CPU, such as "cpu". | `gpu:0` |
| `flip` | Whether to perform flipped inference; if `True`, the model also infers the horizontally flipped input image and fuses the two results to improve the accuracy of the face features | `bool` | None | `False` |
| `use_hpip` | Whether to enable the high-performance inference plugin | `bool` | None | `False` |
| `hpi_config` | High-performance inference configuration | `dict`/`None` | None | None |
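The `flip` option can be pictured as averaging the embedding of the original image with that of its mirror image, then re-normalizing. A simplified sketch of such a fusion (the model's internal implementation may differ; the 2-dim vectors are hypothetical):

```python
import math

def l2_normalize(v):
    """Scale a vector to unit length."""
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def fuse_features(feat, feat_flipped):
    """Average two embeddings element-wise, then re-normalize."""
    fused = [(a + b) / 2 for a, b in zip(feat, feat_flipped)]
    return l2_normalize(fused)

# Hypothetical embeddings from the original and horizontally flipped image.
fused = fuse_features([0.6, 0.8], [0.8, 0.6])
```

Fusing the two views tends to cancel pose-dependent noise, at the cost of roughly doubling inference time.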
| Parameter | Parameter Description | Parameter Type | Options | Default Value |
|---|---|---|---|---|
| `input` | Data to be predicted, supporting multiple input types | `Python Var`/`str`/`list` | None | None |
| `batch_size` | Batch size | `int` | Any integer | `1` |
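For illustration, `batch_size` controls how many inputs are grouped per forward pass. Splitting a list of inputs into consecutive batches can be sketched as follows (helper name and file names are hypothetical):

```python
def make_batches(inputs, batch_size=1):
    """Split a list of inputs into consecutive batches of at most batch_size items."""
    return [inputs[i:i + batch_size] for i in range(0, len(inputs), batch_size)]

batches = make_batches(["face1.jpg", "face2.jpg", "face3.jpg"], batch_size=2)
# → [["face1.jpg", "face2.jpg"], ["face3.jpg"]]
```

Larger batches usually improve GPU throughput but increase memory usage; the last batch may be smaller than `batch_size`.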
| Method | Method Description | Parameter | Parameter Type | Parameter Description | Default Value |
|---|---|---|---|---|---|
| `print()` | Print the results to the terminal | `format_json` | `bool` | Whether to format the output content using JSON indentation | `True` |
| | | `indent` | `int` | Specify the indentation level to beautify the output JSON data, making it more readable; only effective when `format_json` is `True` | `4` |
| | | `ensure_ascii` | `bool` | Control whether non-ASCII characters are escaped to Unicode. If set to `True`, all non-ASCII characters are escaped; `False` retains the original characters; only effective when `format_json` is `True` | `False` |
| `save_to_json()` | Save the results as a JSON file | `save_path` | `str` | The path to save the file. If it is a directory, the saved file name will be consistent with the input file name | None |
| | | `indent` | `int` | Specify the indentation level to beautify the output JSON data, making it more readable; only effective when `format_json` is `True` | `4` |
| | | `ensure_ascii` | `bool` | Control whether non-ASCII characters are escaped to Unicode. If set to `True`, all non-ASCII characters are escaped; `False` retains the original characters; only effective when `format_json` is `True` | `False` |
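The `indent` and `ensure_ascii` parameters behave like the arguments of the same names in Python's standard `json.dumps`, which the following self-contained example demonstrates (the `result` dict is illustrative, not actual model output):

```python
import json

result = {"feature_dim": 128, "label": "人脸"}

# indent=4 pretty-prints over multiple lines;
# ensure_ascii=False keeps non-ASCII characters as-is.
pretty = json.dumps(result, indent=4, ensure_ascii=False)

# ensure_ascii=True escapes non-ASCII characters to \uXXXX sequences.
escaped = json.dumps(result, ensure_ascii=True)
```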
| Attribute | Attribute Description |
|---|---|
| `json` | Get the prediction result in `json` format |
The specific content of the validation result file is:

```json
{
  "done_flag": true,
  "check_pass": true,
  "attributes": {
    "train_label_file": "../../dataset/face_rec_examples/train/label.txt",
    "train_num_classes": 995,
    "train_samples": 1000,
    "train_sample_paths": [
      "check_dataset/demo_img/01378592.jpg",
      "check_dataset/demo_img/04331410.jpg",
      "check_dataset/demo_img/03485713.jpg",
      "check_dataset/demo_img/02382123.jpg",
      "check_dataset/demo_img/01722397.jpg",
      "check_dataset/demo_img/02682349.jpg",
      "check_dataset/demo_img/00272794.jpg",
      "check_dataset/demo_img/03151987.jpg",
      "check_dataset/demo_img/01725764.jpg",
      "check_dataset/demo_img/02580369.jpg"
    ],
    "val_label_file": "../../dataset/face_rec_examples/val/pair_label.txt",
    "val_num_classes": 2,
    "val_samples": 500,
    "val_sample_paths": [
      "check_dataset/demo_img/Don_Carcieri_0001.jpg",
      "check_dataset/demo_img/Eric_Fehr_0001.jpg",
      "check_dataset/demo_img/Harry_Kalas_0001.jpg",
      "check_dataset/demo_img/Francis_Ford_Coppola_0001.jpg",
      "check_dataset/demo_img/Amer_al-Saadi_0001.jpg",
      "check_dataset/demo_img/Sergei_Ivanov_0001.jpg",
      "check_dataset/demo_img/Erin_Runnion_0003.jpg",
      "check_dataset/demo_img/Bill_Stapleton_0001.jpg",
      "check_dataset/demo_img/Daniel_Bruehl_0001.jpg",
      "check_dataset/demo_img/Clare_Short_0004.jpg"
    ]
  },
  "analysis": {},
  "dataset_path": "./dataset/face_rec_examples",
  "show_type": "image",
  "dataset_type": "ClsDataset"
}
```
The verification results above show that `check_pass` being `True` means the dataset format meets the requirements. Details of the other indicators are as follows:

* `attributes.train_num_classes`: The number of classes in the training dataset is 995;
* `attributes.val_num_classes`: The number of classes in the validation dataset is 2;
* `attributes.train_samples`: The number of training samples in the dataset is 1000;
* `attributes.val_samples`: The number of validation samples in the dataset is 500;
* `attributes.train_sample_paths`: The list of relative paths to the visualization images of the training samples.

The face feature module does not support data format conversion or dataset splitting.
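A small helper along these lines could be used to check the result file programmatically (a sketch assuming the JSON layout shown above; the helper name is hypothetical):

```python
import json

def dataset_check_passed(result_text):
    """Return True if a dataset validation result reports success."""
    result = json.loads(result_text)
    return bool(result.get("done_flag")) and bool(result.get("check_pass"))

# Abbreviated stand-in for the contents of the validation result file.
sample = '{"done_flag": true, "check_pass": true, "attributes": {"train_samples": 1000}}'
ok = dataset_check_passed(sample)
```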
After model training completes, all outputs are saved in the specified output directory (default is `./output/`). To specify a save path, use the `-o Global.output` field in the configuration file. Typically, the following outputs are included:

* `train_result.json`: A file that records the training results, indicating whether the training task was completed successfully, and includes metrics, paths to related files, etc.
* `train.log`: A log file that records changes in model metrics, loss variations, and other details during the training process.
* `config.yaml`: A configuration file that logs the hyperparameter settings for the current training session.
* `.pdparams`, `.pdema`, `.pdopt.pdstate`, `.pdiparams`, `.json`: Files related to model weights, including network parameters, the optimizer, EMA (Exponential Moving Average), static graph network parameters, and the static graph network structure.

Note: The static graph network structure is stored as a `.json` file (instead of the former protobuf `.pdmodel` file) to be compatible with PIR and to be more flexible and scalable.

After training, the model can be evaluated on the validation dataset with a single command:

```bash
python main.py -c paddlex/configs/modules/face_detection/MobileFaceNet.yaml \
    -o Global.mode=evaluate \
    -o Global.dataset_dir=./dataset/face_rec_examples
```
Similar to model training, the process involves the following steps:
* Specify the path to the `.yaml` configuration file for the model (here it is `MobileFaceNet.yaml`)
* Set the mode to model evaluation: `-o Global.mode=evaluate`
* Specify the path to the validation dataset: `-o Global.dataset_dir`
Other related parameters can be configured by modifying the fields under `Global` and `Evaluate` in the `.yaml` configuration file. For detailed information, please refer to [PaddleX Common Configuration Parameters for Models](../../instructions/config_parameters_common.en.md).