Publications

Learning Monocular Face Reconstruction using Multi-View Supervision

IEEE International Conference on Automatic Face and Gesture Recognition (FG'20)

Publication date: November 1, 2020

Zhixin Shu, Duygu Ceylan, Kalyan Sunkavalli, Eli Shechtman, Sunil Hadap, Dimitris Samaras

Best Paper Runner-Up Award

We present a method to reconstruct faces from a single portrait image. While traditional face reconstruction methods fit low-dimensional 3D morphable models (3DMMs) to images, we train a deep network to regress depth from a single image directly. We do so by combining supervised losses on synthetic data with indirect supervision on real data through a novel multi-view photo-consistency loss, and we regularize the depth estimation with a 3DMM. We demonstrate that this leads to results that preserve facial features, capture facial geometry beyond what 3DMMs can represent, and are robust to variations in viewpoint. We evaluate our method on multiple datasets and through ablation studies, and show that it significantly outperforms previous work.
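To make the multi-view photo-consistency idea concrete, here is a minimal PyTorch sketch of such a loss, not the authors' implementation: the function name, tensor shapes, and camera-parameter arguments (K, K_inv, T_ref_to_src) are all assumptions. It back-projects a predicted depth map into 3D, reprojects into a second view of the same subject, warps that view back, and penalizes color differences.

```python
import torch
import torch.nn.functional as F

def photo_consistency_loss(depth_ref, img_ref, img_src, K, K_inv, T_ref_to_src):
    """Hypothetical multi-view photo-consistency loss (illustrative only).
    depth_ref:     (B,1,H,W) depth predicted for the reference portrait
    img_ref/src:   (B,3,H,W) reference and source views of the same subject
    K, K_inv:      (B,3,3) camera intrinsics and their inverse
    T_ref_to_src:  (B,4,4) relative pose from the reference to the source view
    """
    B, _, H, W = depth_ref.shape
    device = depth_ref.device

    # Pixel grid in homogeneous coordinates: (B, 3, H*W).
    ys, xs = torch.meshgrid(
        torch.arange(H, device=device, dtype=torch.float32),
        torch.arange(W, device=device, dtype=torch.float32),
        indexing="ij")
    ones = torch.ones_like(xs)
    pix = torch.stack([xs, ys, ones], dim=0).view(1, 3, -1).expand(B, -1, -1)

    # Back-project pixels to reference camera space, then move them into the source view.
    cam_pts = (K_inv @ pix) * depth_ref.view(B, 1, -1)                  # (B,3,H*W)
    cam_pts_h = torch.cat([cam_pts, torch.ones(B, 1, H * W, device=device)], dim=1)
    src_pts = (T_ref_to_src @ cam_pts_h)[:, :3]                         # (B,3,H*W)

    # Project into the source image plane and build a sampling grid in [-1, 1].
    proj = K @ src_pts
    uv = proj[:, :2] / proj[:, 2:3].clamp(min=1e-6)
    u = 2.0 * uv[:, 0] / (W - 1) - 1.0
    v = 2.0 * uv[:, 1] / (H - 1) - 1.0
    grid = torch.stack([u, v], dim=-1).view(B, H, W, 2)

    # Warp the source image into the reference view and compare colors.
    warped = F.grid_sample(img_src, grid, align_corners=True, padding_mode="border")
    return (warped - img_ref).abs().mean()
```

If the predicted depth is correct, the warped source view should match the reference image, so this loss provides supervision on real multi-view data without requiring ground-truth depth.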
