---
comments: true
---
# Face Recognition Pipeline Tutorial
## 1. Introduction to the Face Recognition Pipeline
Face recognition is a crucial component in the field of computer vision, aiming to automatically identify individuals by analyzing and comparing facial features. This task involves not only detecting faces in images but also extracting and matching facial features to find corresponding identity information in a database. Face recognition is widely used in security authentication, surveillance systems, social media, smart devices, and other scenarios.
The face recognition pipeline is an end-to-end system dedicated to solving face detection and recognition tasks. It can quickly and accurately locate face regions in images, extract facial features, and retrieve and compare them with pre-established features in a feature database to confirm identity information.
The face recognition pipeline includes a face detection module and a face feature module, each offering several models. You can select a model from each module based on the benchmark data below: choose a model with higher accuracy if you prioritize accuracy, a model with faster inference if you prioritize speed, or a model with a smaller footprint if you prioritize storage size.
Face Detection Module:
| Model | Model Download Link | AP (%) Easy/Medium/Hard | GPU Inference Time (ms) | CPU Inference Time (ms) | Model Size (M) | Description |
|---|---|---|---|---|---|---|
| BlazeFace | Inference Model/Trained Model | 77.7/73.4/49.5 | | | 0.447 | A lightweight and efficient face detection model |
| BlazeFace-FPN-SSH | Inference Model/Trained Model | 83.2/80.5/60.5 | 52.4 | 73.2 | 0.606 | Improved BlazeFace with FPN and SSH structures |
| PicoDet_LCNet_x2_5_face | Inference Model/Trained Model | 93.7/90.7/68.1 | 33.7 | 185.1 | 28.9 | Face detection model based on PicoDet_LCNet_x2_5 |
| PP-YOLOE_plus-S_face | Inference Model/Trained Model | 93.9/91.8/79.8 | 25.8 | 159.9 | 26.5 | Face detection model based on PP-YOLOE_plus-S |
Note: The above accuracy metrics are evaluated on the WIDER-FACE validation set with an input size of 640x640. All GPU inference times are based on an NVIDIA V100 machine with FP32 precision. CPU inference speeds are based on an Intel(R) Xeon(R) Gold 6271C CPU @ 2.60GHz and FP32 precision.
Face Recognition Module:
| Model | Model Download Link | Output Feature Dimension | Acc (%) AgeDB-30/CFP-FP/LFW | GPU Inference Time (ms) | CPU Inference Time (ms) | Model Size (M) | Description |
|---|---|---|---|---|---|---|---|
| MobileFaceNet | Inference Model/Trained Model | 128 | 96.28/96.71/99.58 | 5.7 | 101.6 | 4.1 | Face recognition model trained on MS1Mv3 based on MobileFaceNet |
| ResNet50_face | Inference Model/Trained Model | 512 | 98.12/98.56/99.77 | 8.7 | 200.7 | 87.2 | Face recognition model trained on MS1Mv3 based on ResNet50 |
Note: The above accuracy metrics are Accuracy scores measured on the AgeDB-30, CFP-FP, and LFW datasets, respectively. All GPU inference times are based on an NVIDIA Tesla T4 machine with FP32 precision. CPU inference speeds are based on an Intel(R) Xeon(R) Gold 5117 CPU @ 2.00GHz with 8 threads and FP32 precision.
## 2. Quick Start
The pre-trained model pipelines provided by PaddleX can be quickly experienced. You can experience the face recognition pipeline online, or locally via the command line or Python.

### 2.1 Online Experience
Online experience is not supported at the moment.

### 2.2 Local Experience
> ❗ Before using the face recognition pipeline locally, please ensure that you have completed the installation of the PaddleX wheel package according to the [PaddleX Installation Guide](../../../installation/installation.md).

#### 2.2.1 Command Line Experience
Command line experience is not supported at the moment.

#### 2.2.2 Integration via Python Script
Please download the [test image](https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/friends1.jpg) for testing. Running this pipeline requires a pre-built facial feature library. You can download the official demo data and use it to build the facial feature library in the steps that follow. Use the following commands to download the demo dataset to a specified folder:

```bash
cd /path/to/paddlex
wget https://paddle-model-ecology.bj.bcebos.com/paddlex/data/face_demo_gallery.tar
tar -xf ./face_demo_gallery.tar
```

If you wish to build a facial feature library using a private dataset, please refer to [Section 2.3: Data Organization for Building a Feature Library](#23-data-organization-for-building-a-feature-library). Afterward, you can build the facial feature library and run inference with the face recognition pipeline in just a few lines of code:

```python
from paddlex import create_pipeline

pipeline = create_pipeline(pipeline="face_recognition")

index_data = pipeline.build_index(gallery_imgs="face_demo_gallery", gallery_label="face_demo_gallery/gallery.txt")
index_data.save("face_index")

output = pipeline.predict("friends1.jpg", index=index_data)
for res in output:
    res.print()
    res.save_to_img("./output/")
```

In the above Python script, the following steps are executed:

(1) Call `create_pipeline` to instantiate a face recognition pipeline object. The specific parameter descriptions are as follows:

| Parameter | Description | Type | Default |
|---|---|---|---|
| `pipeline` | The name of the pipeline or the path to the pipeline configuration file. If it is a pipeline name, it must be a pipeline supported by PaddleX. | `str` | `None` |
| `device` | The device for pipeline model inference. Supports: `"gpu"`, `"cpu"`. | `str` | `"gpu"` |
| `use_hpip` | Whether to enable high-performance inference; only available when the pipeline supports high-performance inference. | `bool` | `False` |
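
For example, to run the pipeline on a CPU instead of the default GPU, pass `device="cpu"`. A minimal sketch using only the parameters listed above:

```python
from paddlex import create_pipeline

# Minimal sketch: create the face recognition pipeline on CPU.
# "gpu" is the default device (see the parameter table above).
pipeline = create_pipeline(pipeline="face_recognition", device="cpu")
```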
(2) Call the `build_index` method of the face recognition pipeline object to build the facial feature library. The specific parameter descriptions are as follows:

| Parameter | Description | Type | Default |
|---|---|---|---|
| `gallery_imgs` | Base library images to be added. Supported formats: 1. `str` type representing the root directory of images, with the data organized in the same way as when the feature library was built (refer to [Section 2.3: Data Organization for Building a Feature Library](#23-data-organization-for-building-a-feature-library)); 2. a list of `numpy.ndarray` image data. | `str`\|`list` | `None` |
| `gallery_label` | Annotation information for the base library images. Supported formats: 1. `str` type representing the path to the annotation file, with the data organized in the same way as when the feature library was built (refer to [Section 2.3: Data Organization for Building a Feature Library](#23-data-organization-for-building-a-feature-library)); 2. a list of `str` annotations for the base library images. | `str`\|`list` | `None` |
(3) Call the `save` method of the feature library object to save the feature library to disk. The specific parameter descriptions are as follows:

| Parameter | Description | Type | Default Value |
|---|---|---|---|
| `save_path` | The directory in which to save the feature library file, e.g., `face_index`. | `str` | `None` |
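
A saved feature library can later be reused without rebuilding it. Below is a minimal sketch, assuming that `predict` also accepts the directory of a saved feature library (here `face_index`, written by `index_data.save` above) for its `index` parameter:

```python
from paddlex import create_pipeline

pipeline = create_pipeline(pipeline="face_recognition")

# Minimal sketch (assumption): pass the directory of a previously saved
# feature library instead of rebuilding the gallery on every run.
output = pipeline.predict("friends1.jpg", index="face_index")
for res in output:
    res.print()
    res.save_to_img("./output/")
```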
(4) Call the `predict` method of the face recognition pipeline object for inference. The `predict` method supports the following input types:

| Parameter Type | Description |
|---|---|
| Python Var | Supports directly passing in Python variables, such as image data represented by `numpy.ndarray`. |
| `str` | Supports passing in the file path of the data to be predicted, such as the local path of an image file: `/root/data/img.jpg`. |
| `str` | Supports passing in the URL of the data file to be predicted, such as the network URL of an image file. |
| `str` | Supports passing in a local directory containing the data files to be predicted, such as the local path: `/root/data/`. |
| `dict` | Supports passing in a dictionary type, where the key needs to correspond to the specific task, such as `"img"` for image classification tasks; the value of the dictionary supports the above types of data, for example: `{"img": "/root/data1"}`. |
| `list` | Supports passing in a list, where the list elements need to be of the above types of data, such as `[numpy.ndarray, numpy.ndarray]`, `["/root/data/img1.jpg", "/root/data/img2.jpg"]`, `["/root/data1", "/root/data2"]`, `[{"img": "/root/data1"}, {"img": "/root/data2/img.jpg"}]`. |
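
For example, several inputs can be batched in a single call by passing a list. A minimal sketch, continuing with the pipeline and saved `face_index` library from above (the same demo image is repeated purely to illustrate the list form):

```python
# Minimal sketch: predict() also accepts a list of inputs; each element is
# processed and yields one result.
output = pipeline.predict(["friends1.jpg", "friends1.jpg"], index="face_index")
for res in output:
    res.print()
```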
(5) Process the prediction results. The result for each sample can be printed to the terminal or saved to a file via the following methods:

| Method | Description | Method Parameters |
|---|---|---|
| `print` | Print results to the terminal | `format_json`: `bool`, whether to format the output with JSON indentation, default is `True`; `indent`: `int`, JSON formatting setting, effective only when `format_json` is `True`, default is 4; `ensure_ascii`: `bool`, JSON formatting setting, effective only when `format_json` is `True`, default is `False`. |
| `save_to_json` | Save results as a JSON file | `save_path`: `str`, file path for saving; if it is a directory, the saved file name matches the input file name; `indent`: `int`, JSON formatting setting, default is 4; `ensure_ascii`: `bool`, JSON formatting setting, default is `False`. |
| `save_to_img` | Save results as an image file | `save_path`: `str`, file path for saving; if it is a directory, the saved file name matches the input file name. |
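
For instance, each result can be kept both as an annotated image and as structured JSON. A minimal sketch using the result methods listed above, continuing with the saved `face_index` library:

```python
# Minimal sketch: persist each prediction result both as an annotated image
# and as a JSON file, using the result methods from the table above.
output = pipeline.predict("friends1.jpg", index="face_index")
for res in output:
    res.save_to_img("./output/")
    res.save_to_json("./output/", indent=4, ensure_ascii=False)
```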
The feature library can also be modified after it is built. The related methods (`build_index`, `append_index`, and `remove_index`) accept the following parameters:

| Parameter | Description | Type | Default |
|---|---|---|---|
| `gallery_imgs` | Base library images to be added. Supported formats: 1. `str` type representing the root directory of images, with the data organized in the same way as when the feature library was built (refer to Section 2.3: Data Organization for Building a Feature Library); 2. a list of `numpy.ndarray` image data. | `str`\|`list` | `None` |
| `gallery_label` | Annotation information for the base library images. Supported formats: 1. `str` type representing the path to the annotation file, with the data organized in the same way as when the feature library was built (refer to Section 2.3: Data Organization for Building a Feature Library); 2. a list of `str` annotations for the base library images. | `str`\|`list` | `None` |
| `remove_ids` | Index entries to be removed. Supported formats: 1. `str` type representing the path of a txt file whose content lists the IDs of the index entries to remove, one ID per line; 2. a list of `int` index IDs. Only effective in `remove_index`. | `str`\|`list` | `None` |
| `index` | Feature library. Supported formats: 1. the path to the directory containing the feature library files (`vector.index` and `index_info.yaml`); 2. an `IndexData` feature library object. Only effective in `append_index` and `remove_index`, representing the feature library to be modified. | `str`\|`IndexData` | `None` |
| `index_type` | Supports `HNSW32`, `IVF`, and `Flat`. `HNSW32` offers fast retrieval and high accuracy but does not support the `remove_index()` operation; `IVF` offers fast retrieval but relatively lower accuracy, and supports both `append_index()` and `remove_index()`; `Flat` offers lower retrieval speed but higher accuracy, and supports both `append_index()` and `remove_index()`. | `str` | `HNSW32` |
| `metric_type` | Supports `IP` (Inner Product) and `L2` (Euclidean Distance). | `str` | `IP` |
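
A minimal sketch of maintaining a modifiable library, assuming that `append_index` and `remove_index` are methods of the pipeline object taking the parameters above and returning the updated feature library (the demo gallery annotation file is reused here purely to illustrate the call shape):

```python
from paddlex import create_pipeline

pipeline = create_pipeline(pipeline="face_recognition")

# Build the library with IVF so entries can be removed later; the default
# HNSW32 index type does not support remove_index() (see the table above).
index_data = pipeline.build_index(
    gallery_imgs="face_demo_gallery",
    gallery_label="face_demo_gallery/gallery.txt",
    index_type="IVF",
    metric_type="IP",
)

# Append more entries to the existing library.
index_data = pipeline.append_index(
    gallery_imgs="face_demo_gallery",
    gallery_label="face_demo_gallery/gallery.txt",
    index=index_data,
)

# Remove entries by their integer IDs, then save the updated library.
index_data = pipeline.remove_index(remove_ids=[0, 1], index=index_data)
index_data.save("face_index_ivf")
```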
For service-based deployment, the pipeline can also be called through an HTTP API. The main operations provided by the service are as follows:
**`buildIndex`**: Build the feature vector index.

`POST /face-recognition-index-build`
| Name | Type | Description | Required |
|---|---|---|---|
| `imageLabelPairs` | `array` | Image-label pairs used to build the index. | Yes |
Each element in `imageLabelPairs` is an object with the following properties:

| Name | Type | Description |
|---|---|---|
| `image` | `string` | The URL of an image file accessible to the service, or the Base64-encoded content of the image file. |
| `label` | `string` | Label. |
The `result` in the response body has the following properties:

| Name | Type | Description |
|---|---|---|
| `indexKey` | `string` | The key corresponding to the index, used to identify the established index. It can be used as input for other operations. |
| `idMap` | `object` | Mapping from vector ID to label. |
**`addImagesToIndex`**: Add images (and their corresponding feature vectors) to the index.

`POST /face-recognition-index-add`
| Name | Type | Description | Required |
|---|---|---|---|
| `imageLabelPairs` | `array` | Image-label pairs used to build the index. | Yes |
| `indexKey` | `string` | The key corresponding to the index, provided by the `buildIndex` operation. | Yes |
Each element in `imageLabelPairs` is an object with the following properties:

| Name | Type | Description |
|---|---|---|
| `image` | `string` | The URL of an image file accessible to the service, or the Base64-encoded content of the image file. |
| `label` | `string` | Label. |
The `result` in the response body has the following properties:

| Name | Type | Description |
|---|---|---|
| `idMap` | `object` | Mapping from vector ID to label. |
**`removeImagesFromIndex`**: Remove images (and their corresponding feature vectors) from the index.

`POST /face-recognition-index-remove`
| Name | Type | Description | Required |
|---|---|---|---|
| `ids` | `array` | IDs of the vectors to be removed from the index. | Yes |
| `indexKey` | `string` | The key corresponding to the index, provided by the `buildIndex` operation. | Yes |
The `result` in the response body has the following properties:

| Name | Type | Description |
|---|---|---|
| `idMap` | `object` | Mapping from vector ID to label. |
**`infer`**: Perform image recognition.

`POST /face-recognition-infer`
| Name | Type | Description | Required |
|---|---|---|---|
| `image` | `string` | The URL of an image file accessible to the service, or the Base64-encoded content of the image file. | Yes |
| `indexKey` | `string` | The key corresponding to the index, provided by the `buildIndex` operation. | No |
The `result` in the response body has the following properties:

| Name | Type | Description |
|---|---|---|
| `faces` | `array` | Information about the detected faces. |
| `image` | `string` | The recognition result image, in JPEG format and encoded in Base64. |
Each element in `faces` is an object with the following properties:

| Name | Type | Description |
|---|---|---|
| `bbox` | `array` | Face bounding box. The elements of the array are, in order, the x-coordinate of the top-left corner, the y-coordinate of the top-left corner, the x-coordinate of the bottom-right corner, and the y-coordinate of the bottom-right corner. |
| `recResults` | `array` | Recognition results. |
| `score` | `number` | Detection score. |
Each element in `recResults` is an object with the following properties:

| Name | Type | Description |
|---|---|---|
| `label` | `string` | Label. |
| `score` | `number` | Recognition score. |
Below is an example of calling the service operations above from Python:

```python
import base64
import pprint
import sys

import requests

API_BASE_URL = "http://0.0.0.0:8080"

base_image_label_pairs = [
    {"image": "./demo0.jpg", "label": "ID0"},
    {"image": "./demo1.jpg", "label": "ID1"},
    {"image": "./demo2.jpg", "label": "ID2"},
]
image_label_pairs_to_add = [
    {"image": "./demo3.jpg", "label": "ID2"},
]
ids_to_remove = [1]
infer_image_path = "./demo4.jpg"
output_image_path = "./out.jpg"

# Build the index from the base image-label pairs.
for pair in base_image_label_pairs:
    with open(pair["image"], "rb") as file:
        image_bytes = file.read()
        image_data = base64.b64encode(image_bytes).decode("ascii")
    pair["image"] = image_data

payload = {"imageLabelPairs": base_image_label_pairs}
resp_index_build = requests.post(f"{API_BASE_URL}/face-recognition-index-build", json=payload)
if resp_index_build.status_code != 200:
    print(f"Request to face-recognition-index-build failed with status code {resp_index_build.status_code}.")
    pprint.pp(resp_index_build.json())
    sys.exit(1)
result_index_build = resp_index_build.json()["result"]
print(f"Number of images indexed: {len(result_index_build['idMap'])}")

# Add more images to the existing index, referencing it by indexKey.
for pair in image_label_pairs_to_add:
    with open(pair["image"], "rb") as file:
        image_bytes = file.read()
        image_data = base64.b64encode(image_bytes).decode("ascii")
    pair["image"] = image_data

payload = {"imageLabelPairs": image_label_pairs_to_add, "indexKey": result_index_build["indexKey"]}
resp_index_add = requests.post(f"{API_BASE_URL}/face-recognition-index-add", json=payload)
if resp_index_add.status_code != 200:
    print(f"Request to face-recognition-index-add failed with status code {resp_index_add.status_code}.")
    pprint.pp(resp_index_add.json())
    sys.exit(1)
result_index_add = resp_index_add.json()["result"]
print(f"Number of images indexed: {len(result_index_add['idMap'])}")

# Remove vectors from the index by their IDs.
payload = {"ids": ids_to_remove, "indexKey": result_index_build["indexKey"]}
resp_index_remove = requests.post(f"{API_BASE_URL}/face-recognition-index-remove", json=payload)
if resp_index_remove.status_code != 200:
    print(f"Request to face-recognition-index-remove failed with status code {resp_index_remove.status_code}.")
    pprint.pp(resp_index_remove.json())
    sys.exit(1)
result_index_remove = resp_index_remove.json()["result"]
print(f"Number of images indexed: {len(result_index_remove['idMap'])}")

# Run face recognition against the index and save the annotated image.
with open(infer_image_path, "rb") as file:
    image_bytes = file.read()
    image_data = base64.b64encode(image_bytes).decode("ascii")

payload = {"image": image_data, "indexKey": result_index_build["indexKey"]}
resp_infer = requests.post(f"{API_BASE_URL}/face-recognition-infer", json=payload)
if resp_infer.status_code != 200:
    print(f"Request to face-recognition-infer failed with status code {resp_infer.status_code}.")
    pprint.pp(resp_infer.json())
    sys.exit(1)
result_infer = resp_infer.json()["result"]

with open(output_image_path, "wb") as file:
    file.write(base64.b64decode(result_infer["image"]))
print(f"Output image saved at {output_image_path}")
print("\nDetected faces:")
pprint.pp(result_infer["faces"])
```