Cross-modal deep face normals with deactivable skip connections
We present an approach for estimating surface normals from in-the-wild color images of faces. While data-driven strategies have been proposed for single face images, limited available ground truth data makes this problem difficult. To alleviate this issue, we propose a method that can leverage all available image and normal data, whether paired or not, thanks to a novel cross-modal learning architecture. In particular, we enable additional training with single modality data, either color or normal, by using two encoder-decoder networks with a shared latent space. The proposed architecture also enables face details to be transferred between the image and normal domains, given paired data, through skip connections between the image encoder and normal decoder. Core to our approach is a novel module that we call deactivable skip connections, which allows integrating both the auto-encoded and image-to-normal branches within the same architecture that can be trained end-to-end. This allows learning of a rich latent space that can accurately capture the normal information. We compare against state-of-the-art methods and show that our approach can achieve significant improvements, both quantitative and qualitative, with natural face images.
Main authors: | Abrevaya, VF; Boukhayma, A; Torr, PHS; Boyer, E |
---|---|
Material type: | Conference item |
Language: | English |
Published: | Institute of Electrical and Electronics Engineers, 2020 |
author | Abrevaya, VF Boukhayma, A Torr, PHS Boyer, E |
collection | OXFORD |
description | We present an approach for estimating surface normals from in-the-wild color images of faces. While data-driven strategies have been proposed for single face images, limited available ground truth data makes this problem difficult. To alleviate this issue, we propose a method that can leverage all available image and normal data, whether paired or not, thanks to a novel cross-modal learning architecture. In particular, we enable additional training with single modality data, either color or normal, by using two encoder-decoder networks with a shared latent space. The proposed architecture also enables face details to be transferred between the image and normal domains, given paired data, through skip connections between the image encoder and normal decoder. Core to our approach is a novel module that we call deactivable skip connections, which allows integrating both the auto-encoded and image-to-normal branches within the same architecture that can be trained end-to-end. This allows learning of a rich latent space that can accurately capture the normal information. We compare against state-of-the-art methods and show that our approach can achieve significant improvements, both quantitative and qualitative, with natural face images. |
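The deactivable skip connections described in the abstract can be sketched as a gated skip: during paired image-to-normal training the encoder features pass through to the normal decoder, while during single-modality auto-encoding the skip is switched off so the decoder must rely on the shared latent code alone. A minimal NumPy illustration of this gating idea (function names and feature shapes are hypothetical, not the authors' implementation):

```python
import numpy as np

def decoder_block(dec_feat, skip_feat, active):
    """Combine decoder features with an encoder skip connection
    that can be deactivated.

    When `active` is True (paired image->normal branch), encoder
    features are concatenated with the decoder features. When False
    (single-modality auto-encoding branch), the skip input is zeroed,
    so the block sees the same input shape but no cross-modal detail,
    letting both branches share one end-to-end architecture.
    """
    gate = 1.0 if active else 0.0
    return np.concatenate([dec_feat, gate * skip_feat], axis=-1)

# Hypothetical feature maps: 4 spatial positions, 8 channels each.
dec = np.ones((4, 8))
enc = np.full((4, 8), 2.0)

paired = decoder_block(dec, enc, active=True)    # skip carries encoder detail
single = decoder_block(dec, enc, active=False)   # skip channels zeroed out

print(paired.shape)   # (4, 16)
```

Because the deactivated skip contributes zeros rather than being removed, the decoder keeps a fixed input width in both modes, which is what allows the auto-encoded and image-to-normal branches to be trained within the same network.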
format | Conference item |
id | oxford-uuid:64a89f87-84a8-4ec4-89c4-0ebcdb51ebb7 |
institution | University of Oxford |
language | English |
publishDate | 2020 |
publisher | Institute of Electrical and Electronics Engineers |
title | Cross-modal deep face normals with deactivable skip connections |