Face Recognition Grand Challenge

The Face Recognition Grand Challenge (FRGC) was conducted from May 2004 until March 2006 to promote and advance face recognition technology. The FRGC v2 database, created in 2005, has had a significant impact on the development of 3D face recognition. Although many other face databases have been created since then, as of 2022, FRGC v2 continued to be used as "a standard reference database for evaluating 3D face recognition algorithms".

Overview
The Face Recognition Grand Challenge (FRGC) was a project that aimed to promote and advance face recognition technology to support existing face recognition efforts within the U.S. Government. The project ran from May 2004 to March 2006 and was open to face recognition researchers and developers in companies, academia, and research institutions. The FRGC developed new face recognition techniques and prototype systems that significantly improved performance.

The FRGC consisted of progressively difficult challenge problems, each of which included a dataset of facial images and a defined set of experiments. The challenge problems were designed to overcome a key impediment to developing improved face recognition: the lack of data.

There were three main avenues for improving face recognition algorithms: high-resolution images, three-dimensional (3D) face recognition, and new pre-processing techniques. Face recognition systems of the time were designed to work with relatively small, static facial images. In the FRGC, high-resolution images have an average of 250 pixels between the centers of the eyes, significantly more than the 40 to 60 pixels typical of images then in use. The FRGC aimed to foster the development of new algorithms that leverage the additional information present in high-resolution images.
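The interocular-distance criterion can be checked directly from eye-landmark coordinates. The sketch below is illustrative only: the function names and the 150-pixel cutoff are my own assumptions, not part of the FRGC protocol, which simply reports roughly 250 pixels between eye centers for high-resolution stills versus 40 to 60 pixels for legacy imagery.

```python
import math

def interocular_distance(left_eye, right_eye):
    """Pixel distance between the centers of the two eyes."""
    dx = right_eye[0] - left_eye[0]
    dy = right_eye[1] - left_eye[1]
    return math.hypot(dx, dy)

def is_high_resolution(left_eye, right_eye, threshold=150):
    # FRGC high-resolution stills average ~250 px between eye centers,
    # versus 40-60 px in legacy images; the 150 px cutoff here is an
    # illustrative midpoint, not an FRGC-defined value.
    return interocular_distance(left_eye, right_eye) >= threshold
```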

Three-dimensional face recognition algorithms identify faces from the 3D shape of a person's face. Whereas conventional face recognition systems are affected by changes in lighting and pose, 3D face recognition has the potential to perform well despite these variations, because the shape of a face is not altered by them.

In the years leading up to the FRGC, advancements in computer graphics and computer vision enabled the modeling of lighting and pose variation in facial imagery. These advances led to new algorithms that automatically correct for lighting and pose changes before an image is processed by a face recognition system. The pre-processing aspect of the FRGC measures the impact of such algorithms on recognition performance.
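As a rough illustration of what such pre-processing can look like, the sketch below applies a simple photometric normalization (gamma correction followed by histogram equalization) to a grayscale image. This particular method and its parameters are an illustrative choice of mine; the FRGC does not mandate any specific pre-processing algorithm.

```python
import numpy as np

def normalize_illumination(image, gamma=0.5):
    """Simple photometric normalization: gamma correction followed by
    histogram equalization. Illustrative only -- the FRGC does not
    prescribe any particular pre-processing algorithm."""
    img = image.astype(np.float64) / 255.0
    img = img ** gamma                       # compress dynamic range
    # Histogram equalization: map each intensity to its CDF value.
    levels = np.round(img * 255).astype(np.uint8)
    hist = np.bincount(levels.ravel(), minlength=256)
    cdf = np.cumsum(hist) / levels.size
    return (cdf[levels] * 255).astype(np.uint8)
```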

Structure of the Face Recognition Grand Challenge
The FRGC is structured around challenge problems designed to push researchers to meet the FRGC performance goal.

There are three aspects of the FRGC that were new to the face recognition community. The first is its size: the FRGC dataset comprises 50,000 recordings. The second is its complexity: unlike previous face recognition datasets, which focused on still images, the FRGC encompasses three modes:

 * High-resolution still images
 * 3D images
 * Multiple images of a person

The third new aspect is the infrastructure. The Biometric Experimentation Environment (BEE) provides the infrastructure for the FRGC. BEE is an XML-based framework for describing and documenting computational experiments: it supports describing an experiment, distributing it, recording raw results, analyzing and presenting results, and documenting the experiment, all in a common format. This was the first time a computational-experimental environment supported a challenge problem in face recognition or biometrics.

The FRGC Data Set
The FRGC data distribution consists of three parts. The first part is the FRGC dataset. The second part is the FRGC BEE. The BEE distribution includes all the datasets for performing and scoring the six experiments. The third part consists of baseline algorithms for experiments 1 through 4. With all three components, it is possible to run experiments 1 through 4, from processing raw images to producing Receiver Operating Characteristics (ROCs).
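A verification experiment of this kind is scored from a similarity matrix together with a mask marking which gallery-probe pairs come from the same subject; a ROC is then traced by sweeping a decision threshold. The sketch below is a minimal illustration of that scoring idea, not the actual BEE implementation, and all names in it are my own.

```python
import numpy as np

def roc_points(similarity, same_subject, thresholds):
    """Compute (false accept rate, verification rate) pairs from a
    gallery-by-probe similarity matrix. `same_subject` is a boolean
    matrix marking genuine (same-person) pairs. Illustrative sketch,
    not the actual BEE scoring code."""
    genuine = similarity[same_subject]     # same-person scores
    impostor = similarity[~same_subject]   # different-person scores
    points = []
    for t in thresholds:
        far = np.mean(impostor >= t)   # impostors wrongly accepted
        vr = np.mean(genuine >= t)     # genuine pairs correctly accepted
        points.append((far, vr))
    return points
```

FRGC results were commonly summarized as the verification rate at a fixed false accept rate, with 0.1% a typical operating point.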

The FRGC data comprises 50,000 recordings divided into training and validation partitions. The training partition is for algorithm training; the validation partition is for assessing the performance of an approach in a laboratory setting. The validation partition includes data from 4,003 subject sessions. A subject session comprises all images of a person taken during one biometric data collection: four controlled still images, two uncontrolled still images, and one three-dimensional image. The controlled images were taken in a studio setting and are full frontal facial images under two lighting conditions and two facial expressions (smiling and neutral). The uncontrolled images were taken under varying illumination conditions, for example in hallways, atriums, or outdoors. Each set of uncontrolled images contains the same two expressions, smiling and neutral. The 3D image was captured under controlled illumination and includes both a range image and a texture image. The 3D images were acquired with a Minolta Vivid 900/910 series sensor.

The FRGC distribution consists of six experiments. In experiment 1, the gallery comprises a single controlled still image of a person, and each probe consists of a single controlled still image. Experiment 1 serves as the control experiment. Experiment 2 studies the effect of using multiple still images of a person on performance. In experiment 2, each biometric sample consists of the four controlled images of a person taken in a subject session. Thus, the gallery contains four images of each person, all taken in the same subject session, and a probe likewise consists of four images of a person.
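Experiment 2 leaves open how a four-image biometric sample is actually matched. One common approach, sketched below under that assumption, is to fuse all pairwise image-to-image scores, here with their mean; both the fusion rule and the function names are illustrative, not prescribed by the FRGC.

```python
import numpy as np

def fused_similarity(gallery_images, probe_images, score_fn, fuse=np.mean):
    """Similarity between two multi-image biometric samples, obtained by
    fusing all pairwise image-to-image scores. The mean fusion rule is an
    illustrative choice; the FRGC leaves the matching strategy to the
    algorithm developer."""
    scores = [score_fn(g, p) for g in gallery_images for p in probe_images]
    return fuse(scores)
```

For two four-image samples this fuses 16 pairwise scores into one sample-to-sample similarity.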

Experiment 3 measures the performance of 3D face recognition. In experiment 3, both the gallery and probe set consist of 3D images of a person. Experiment 4 assesses recognition performance using uncontrolled images. In experiment 4, the gallery contains a single controlled still image, and the probe set comprises a single uncontrolled still image.

Experiments 5 and 6 compare 3D and 2D images. In both experiments, the gallery consists of 3D images. In experiment 5, the probe set consists of a single controlled still image. In experiment 6, the probe set comprises a single uncontrolled still image.

Sponsors

 * Intelligence Advanced Research Projects Agency (IARPA)
 * Department of Homeland Security (DHS)
 * FBI Criminal Justice Information Services Division
 * Technical Support Working Group (TSWG)
 * National Institute of Justice