Stefanos Zafeiriou is currently a Professor of Machine Learning and Computer Vision in the Department of Computing, Imperial College London, London, U.K., and an EPSRC Early Career Research Fellow. Between 2016 and 2020 he was also a Distinguished Research Fellow with the University of Oulu under the Finnish Distinguished Professor Programme. He was a recipient of a prestigious Junior Research Fellowship from Imperial College London in 2011, and of the President's Medal for Excellence in Research Supervision in 2016. He has served as Associate Editor and Guest Editor for various journals, including IEEE Transactions on Pattern Analysis and Machine Intelligence, the International Journal of Computer Vision, IEEE Transactions on Affective Computing, Computer Vision and Image Understanding, IEEE Transactions on Cybernetics, and the Image and Vision Computing Journal. He has been a Guest Editor of 8+ journal special issues and has co-organised over 16 workshops/special sessions on specialised computer vision topics at top venues, such as CVPR/FG/ICCV/ECCV (including three very successful challenges run at ICCV'13, ICCV'15, and CVPR'17 on facial landmark localisation/tracking). He has co-authored 70+ journal papers, mainly on novel statistical machine learning methodologies applied to computer vision problems, such as 2-D/3-D face analysis, deformable object fitting and tracking, shape from shading, and human behaviour analysis, published in the most prestigious journals in his field, such as IEEE T-PAMI and the International Journal of Computer Vision, as well as many papers in top conferences, such as CVPR, ICCV, ECCV, and ICML. His students are frequent recipients of very prestigious and highly competitive fellowships, including two Google Fellowships, an Intel Fellowship, and four Qualcomm Fellowships. He has more than 12,000 citations to his work and an h-index of 54. He was the General Chair of BMVC 2017 and a co-founder of two startups, Facesoft and Ariel AI.
His work was recently showcased in Science: http://www.sciencemag.org/news/2017/05/computer-scientists-have-created-most-accurate-digital-model-human-face-here-s-what-it.
Generating and Reconstructing Digital Humans
In the past few years, with the advent of Deep Convolutional Neural Networks (DCNNs) and the availability of large amounts of visual data, it has been shown that excellent results can be produced in very challenging tasks, such as visual object recognition, detection, and tracking. Nevertheless, in certain tasks, such as fine-grained object recognition (e.g., face recognition), it is very difficult to collect the amount of data that is needed. In this talk, I will show how, using a special category of Generative Adversarial Networks (GANs), we can generate highly realistic faces and heads and use them to train algorithms for tasks such as face and facial expression recognition. Next, I will reverse the problem and demonstrate how a very powerful, pre-trained face recognition network can be used to perform very accurate 3D shape and texture reconstruction of faces from a single image. I will further show how to create production-ready human heads, as well as single-shot head-to-head translations using translation networks. Finally, I will touch upon how the generation of 3D human fittings can aid in performing detailed 3D face flow estimation, as well as other tasks, such as dense 3D human body/hand and pose estimation, by capitalising on intrinsic mesh convolutions.