| Model | Model Download Link | Output Feature Dimension | Acc (%) AgeDB-30/CFP-FP/LFW | GPU Inference Time (ms) | CPU Inference Time (ms) | Model Size (M) | Description |
|---|---|---|---|---|---|---|---|
| MobileFaceNet | Inference Model/Trained Model | 128 | 96.28/96.71/99.58 | 5.7 | 101.6 | 4.1 | Face feature model trained on the MS1Mv3 dataset based on MobileFaceNet |
| ResNet50_face | Inference Model/Trained Model | 512 | 98.12/98.56/99.77 | 8.7 | 200.7 | 87.2 | Face feature model trained on the MS1Mv3 dataset based on ResNet50 |
Note: The above accuracy metrics are measured on the AgeDB-30, CFP-FP, and LFW datasets, respectively. All GPU inference times are based on an NVIDIA Tesla T4 machine with FP32 precision; CPU inference speeds are based on an Intel(R) Xeon(R) Gold 5117 CPU @ 2.00GHz with 8 threads and FP32 precision.
The specific content of the validation result file is as follows:

```json
{
  "done_flag": true,
  "check_pass": true,
  "attributes": {
    "train_label_file": "../../dataset/face_rec_examples/train/label.txt",
    "train_num_classes": 995,
    "train_samples": 1000,
    "train_sample_paths": [
      "check_dataset/demo_img/01378592.jpg",
      "check_dataset/demo_img/04331410.jpg",
      "check_dataset/demo_img/03485713.jpg",
      "check_dataset/demo_img/02382123.jpg",
      "check_dataset/demo_img/01722397.jpg",
      "check_dataset/demo_img/02682349.jpg",
      "check_dataset/demo_img/00272794.jpg",
      "check_dataset/demo_img/03151987.jpg",
      "check_dataset/demo_img/01725764.jpg",
      "check_dataset/demo_img/02580369.jpg"
    ],
    "val_label_file": "../../dataset/face_rec_examples/val/pair_label.txt",
    "val_num_classes": 2,
    "val_samples": 500,
    "val_sample_paths": [
      "check_dataset/demo_img/Don_Carcieri_0001.jpg",
      "check_dataset/demo_img/Eric_Fehr_0001.jpg",
      "check_dataset/demo_img/Harry_Kalas_0001.jpg",
      "check_dataset/demo_img/Francis_Ford_Coppola_0001.jpg",
      "check_dataset/demo_img/Amer_al-Saadi_0001.jpg",
      "check_dataset/demo_img/Sergei_Ivanov_0001.jpg",
      "check_dataset/demo_img/Erin_Runnion_0003.jpg",
      "check_dataset/demo_img/Bill_Stapleton_0001.jpg",
      "check_dataset/demo_img/Daniel_Bruehl_0001.jpg",
      "check_dataset/demo_img/Clare_Short_0004.jpg"
    ]
  },
  "analysis": {},
  "dataset_path": "./dataset/face_rec_examples",
  "show_type": "image",
  "dataset_type": "ClsDataset"
}
```
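For reference, a result file like the one above is produced by running the module in dataset check mode before training. A minimal sketch, assuming the same `MobileFaceNet.yaml` configuration file and dataset directory used later in this section:

```bash
python main.py -c paddlex/configs/face_recognition/MobileFaceNet.yaml \
    -o Global.mode=check_dataset \
    -o Global.dataset_dir=./dataset/face_rec_examples
```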
In the above validation results, `check_pass` being `true` indicates that the dataset format meets the requirements. Explanations of the other indicators are as follows:

* `attributes.train_num_classes`: The number of classes in the training dataset is 995;
* `attributes.val_num_classes`: The number of classes in the validation dataset is 2;
* `attributes.train_samples`: The number of training samples in the dataset is 1000;
* `attributes.val_samples`: The number of validation samples in the dataset is 500;
* `attributes.train_sample_paths`: The list of relative paths to the visualization images of training samples in the dataset;
* `attributes.val_sample_paths`: The list of relative paths to the visualization images of validation samples in the dataset.

Note: The face feature module does not support data format conversion or dataset splitting.
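In an automated workflow, the `check_pass` field can also be read back programmatically to gate the rest of the pipeline. Below is a minimal sketch using Python's standard library; the result file location `./output/check_dataset_result.json` is an assumption and depends on your `Global.output` setting:

```bash
# Assumed location of the check result -- adjust to your output directory.
RESULT_FILE=./output/check_dataset_result.json

# Exit non-zero unless "check_pass" is true, so a CI script can stop early.
python -c "
import json, sys
with open('${RESULT_FILE}') as f:
    result = json.load(f)
sys.exit(0 if result.get('check_pass') else 1)
" && echo 'dataset check passed' || echo 'dataset check failed'
```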
After model training completes, all outputs are saved in the specified output directory (`./output/` by default). To specify a different save path, use the `-o Global.output` field in the configuration file. Typically, the following outputs are included:

* `train_result.json`: A file that records the training results, indicating whether the training task completed successfully, and includes metrics, paths to related files, etc.
* `train.log`: A log file that records changes in model metrics, loss variations, and other details during the training process.
* `config.yaml`: A configuration file that logs the hyperparameter settings for the current training session.
* `.pdparams`, `.pdema`, `.pdopt.pdstate`, `.pdiparams`, `.pdmodel`: Files related to model weights, including network parameters, optimizer states, EMA (Exponential Moving Average) parameters, static graph network parameters, and the static graph network structure.

After training, you can evaluate the specified model weights on the validation set with a single command:

```bash
python main.py -c paddlex/configs/face_recognition/MobileFaceNet.yaml \
    -o Global.mode=evaluate \
    -o Global.dataset_dir=./dataset/face_rec_examples
```
Similar to model training, the process involves the following steps:
* Specify the path to the `.yaml` configuration file for the model (here it's `MobileFaceNet.yaml`)
* Set the mode to model evaluation: `-o Global.mode=evaluate`
* Specify the path to the validation dataset: `-o Global.dataset_dir`
Other related parameters can be configured by modifying the fields under `Global` and `Evaluate` in the `.yaml` configuration file. For detailed information, please refer to [PaddleX Common Configuration Parameters for Models](../../instructions/config_parameters_common.en.md).
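For example, to evaluate a specific checkpoint instead of the default weights, the weights path can be overridden on the command line rather than by editing the `.yaml` file. A minimal sketch, assuming the best weights from training were saved under `./output/best_model/` (the exact field name and path should be confirmed against the common configuration document linked above):

```bash
python main.py -c paddlex/configs/face_recognition/MobileFaceNet.yaml \
    -o Global.mode=evaluate \
    -o Global.dataset_dir=./dataset/face_rec_examples \
    -o Evaluate.weight_path=./output/best_model/best_model.pdparams
```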