Pushed by the realities of the War on Terror, huge technical strides have been made in facial recognition systems over the past several years.
Most systems acquire an image from a camera and compare it against a database of stored pictures. This works well under ideal conditions, in which the acquired image is well lit and properly oriented toward the camera.
But in poorly lit or dark environments, or when the subject is not in an ideal pose, these systems routinely fail to match the image of a known bad actor to the corresponding image in the database. The consequences of these missed opportunities are obvious and have sent Army researchers hunting for better solutions.
Advancing the state of the art in facial recognition includes mapping thermal images to visible images and mapping images taken from different pose angles to the frontal images commonly stored in watch-list databases.
Army scientists have developed a method and system for face recognition when the database image and the acquired image are in different modalities or at different pose angles because they were acquired by different sensors or at different times. The system converts the coordinates of the eyes and mouth in an arbitrarily oriented photograph into virtual coordinates: an estimate of where those landmarks would appear if the subject's head were turned so that the centers of the eyes and the corner of the mouth lie in a vertical plane (the roll, pitch, and yaw are zero and the scale is one).
The estimate is obtained from models based on a nonlinear Gaussian least-squares differential correction. Matching is then performed by comparing the virtual coordinates of the facial landmarks in the acquired image against those in the database.
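To make the idea concrete, the sketch below fits roll, pitch, yaw, and scale to observed eye and mouth coordinates using Gauss-Newton iteration, the standard form of a nonlinear least-squares differential correction. The canonical 3D landmark positions, the three-landmark choice, and the orthographic camera model are all illustrative assumptions, not details taken from the patent.

```python
import numpy as np

# Hypothetical canonical 3D landmark positions (arbitrary units) for the
# features the article names: eye centers and mouth. These coordinates are
# invented for illustration, not drawn from the patented method.
CANONICAL = np.array([
    [-0.5,  0.5, 0.0],   # left eye center
    [ 0.5,  0.5, 0.0],   # right eye center
    [ 0.0, -0.6, 0.1],   # mouth
])

def rotation(roll, pitch, yaw):
    """Compose rotations about z (roll), x (pitch), and y (yaw)."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rz = np.array([[cr, -sr, 0], [sr, cr, 0], [0, 0, 1]])
    Rx = np.array([[1, 0, 0], [0, cp, -sp], [0, sp, cp]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    return Rz @ Rx @ Ry

def project(params):
    """Orthographic projection of the rotated, scaled canonical landmarks."""
    roll, pitch, yaw, scale = params
    pts = scale * (CANONICAL @ rotation(roll, pitch, yaw).T)
    return pts[:, :2].ravel()          # stacked (x, y) of each landmark

def fit_pose(observed, iters=20):
    """Gauss-Newton differential correction of roll, pitch, yaw, scale."""
    p = np.array([0.0, 0.0, 0.0, 1.0])  # start at frontal pose, unit scale
    eps = 1e-6
    for _ in range(iters):
        residual = observed - project(p)
        # Numerical Jacobian of the projection with respect to the parameters.
        J = np.empty((observed.size, p.size))
        for j in range(p.size):
            dp = np.zeros_like(p)
            dp[j] = eps
            J[:, j] = (project(p + dp) - project(p - dp)) / (2 * eps)
        # Least-squares update step: solve J * delta = residual.
        p = p + np.linalg.lstsq(J, residual, rcond=None)[0]
    return p
```

Once the pose parameters are recovered, the virtual frontal coordinates follow by removing the estimated rotation and scale from the observed landmarks; the fit itself is the differential-correction step the article describes.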
This method also applies to the comparison of thermal images with database images. Since edge and texture information do not correspond between visible and thermal imagery, only the features common to both remain: biometric landmarks such as the eye and mouth locations mentioned above.
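Because only landmark geometry survives across modalities, the cross-modal comparison reduces to measuring how close two sets of pose-normalized landmark coordinates are. A minimal sketch, assuming each identity is stored as a list of 2D virtual landmark coordinates and using summed Euclidean distance as the (hypothetical) similarity score:

```python
import numpy as np

def landmark_distance(a, b):
    """Total Euclidean distance between corresponding landmark coordinates."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return float(np.linalg.norm(a - b, axis=1).sum())

def best_match(probe, gallery):
    """Return the gallery identity whose landmarks lie closest to the probe's.

    `probe` is a list of (x, y) virtual landmark coordinates from, e.g., a
    thermal image; `gallery` maps identity names to visible-light landmark
    sets normalized the same way.
    """
    return min(gallery, key=lambda name: landmark_distance(probe, gallery[name]))

# Illustrative gallery of two identities with invented coordinates.
gallery = {
    "subject_a": [(30.0, 40.0), (70.0, 40.0), (50.0, 80.0)],
    "subject_b": [(28.0, 45.0), (72.0, 45.0), (50.0, 90.0)],
}
probe = [(30.5, 40.2), (69.8, 40.1), (50.1, 80.4)]
```

In practice a threshold on the winning distance would decide whether the probe matches anyone on the watch list at all; the sketch only ranks the candidates.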
- Accurately matches facial images acquired by thermal cameras with visible-light images in a database
- Accurately matches facial images acquired from differing camera angles with images in a database
- US patent 9,875,398 available for license
- Potential for collaboration with Army researchers