Department of Informatics, Artificial Intelligence and Machine Learning Group

[processed by Dominic Schmidli] Face Recognition Aspects with DNNs

Automatic face recognition has been a hot research topic for years, and in recent years face recognition algorithms have matured to a stage where they are used in many daily activities. For example, people unlock their phones using face recognition, and airports rely on automatic face recognition at e-gates. All of these applications, however, compare high-resolution frontal faces with good illumination, a neutral facial expression, and no occlusion, and they are prone to fail when imaging conditions deteriorate.

While deep neural networks are claimed to be more stable under these adverse conditions, to my knowledge there has not been a dedicated study testing this property. We have performed a study on several traditional face recognition algorithms and tested how they react to different imaging aspects, such as illumination, facial expression, resolution and face pose: Face Recognition in Challenging Environments: An Experimental and Reproducible Research Survey (and, in more detail, 2D Face Recognition: An Experimental and Reproducible Research Survey). The experiments in these papers rely on the open-source Python toolbox Bob, and a source code package to reproduce the experiments is provided here. The task in this Bachelor's thesis is to extend that study to more modern face recognition algorithms, more precisely, to pre-trained deep networks such as VGGFace2, ArcFace and/or MobileFaceNet. Of particular interest are the experiments that solely test illumination, facial expression, occlusion and face pose; experiments on other datasets (CAS-PEAL, Labeled Faces in the Wild, YouTubeFaces, MOBIO) are a bonus.
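To give an idea of the verification step such an extension involves: each network maps an aligned face image to a fixed-length embedding, and two faces are compared by a similarity score on these embeddings, commonly cosine similarity. A minimal sketch in plain NumPy (the function name and the toy vectors are illustrative only, not part of any of the mentioned toolkits):

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two face embeddings; higher means more similar."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy embeddings standing in for real network outputs:
same_person = cosine_similarity([0.2, 0.9, 0.1], [0.25, 0.85, 0.12])
different   = cosine_similarity([0.2, 0.9, 0.1], [0.9, -0.1, 0.4])
```

Comparing such scores against a decision threshold, separately for each imaging condition, is what makes the per-aspect evaluation of the original study possible.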

Some of the datasets used in our original experiments are freely available on the Internet, some others are proprietary or no longer accessible. However, Prof. Dr. Sébastien Marcel, my previous supervisor at the Idiap Research Institute in Martigny, still has a copy of all the datasets, and he agreed to host a student at his institution -- either physically or via online access.

Several aspects must be taken care of in this Bachelor's thesis. First, a (possibly generic) feature extraction for the above-mentioned pre-trained deep networks needs to be implemented. I would suggest using the framework-agnostic DNN interface of OpenCV, which can execute deep networks in parallel on the CPU (and possibly also on the GPU). Second, since each of the pre-trained networks relies on a different image resolution and different content (size and position of the face in the image), separate alignment steps are required and need to be configured. Finally, the source code package to run the experiments is no longer up to date, so the experimental setup needs minor revisions.

Requirements

  • A reasonable understanding of deep neural networks.
  • Programming experience in Python.
  • Working with the Linux command line.
  • Decent understanding of written English.