PhotoApp: Photorealistic Appearance Editing of Head Portraits

Photorealistic editing of head portraits is a challenging task as humans are very sensitive to inconsistencies in faces. We present an approach for high-quality intuitive editing of the camera viewpoint and scene illumination (parameterised with an environment map) in a portrait image. This requires our method to capture and control the full reflectance field of the person in the image. Most editing approaches rely on supervised learning using training data captured with setups such as light and camera stages. Such datasets are expensive to acquire, not readily available and do not capture all the rich variations of in-the-wild portrait images. In addition, most supervised approaches only focus on relighting, and do not allow camera viewpoint editing. Thus, they only capture and control a subset of the reflectance field. Recently, portrait editing has been demonstrated by operating in the generative model space of StyleGAN. While such approaches do not require direct supervision, there is a significant loss of quality when compared to the supervised approaches. In this paper, we present a method which learns from limited supervised training data. The training images only include people in a fixed neutral expression with eyes closed, without much hair or background variations. Each person is captured under 150 one-light-at-a-time conditions and under 8 camera poses. Instead of training directly in the image space, we design a supervised problem which learns transformations in the latent space of StyleGAN. This combines the best of supervised learning and generative adversarial modeling. We show that the StyleGAN prior allows for generalisation to different expressions, hairstyles and backgrounds. This produces high-quality photorealistic results for in-the-wild images and significantly outperforms existing methods. Our approach can edit the illumination and pose simultaneously, and runs at interactive rates.
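The two ingredients the abstract names can be made concrete with a short sketch. The code below is our illustration, not the authors' released implementation: it shows (a) how one-light-at-a-time (OLAT) captures yield ground-truth relit targets, since light transport is linear in the light sources, and (b) a supervised network that transforms a StyleGAN latent code toward a target environment map and camera pose. All module names, dimensions, and the loss are assumptions for illustration.

```python
# Illustrative sketch only (assumed names/shapes, not the authors' code).
import torch
import torch.nn as nn
import torch.nn.functional as F

N_LIGHTS = 150          # OLAT conditions per subject (from the abstract)
W_DIM = 512             # StyleGAN latent size (assumed W space)
ENV_DIM = N_LIGHTS * 3  # environment map sampled at the light directions
POSE_DIM = 6            # camera pose parameterisation (assumed)

def relight_olat(olat: torch.Tensor, env: torch.Tensor) -> torch.Tensor:
    """Ground-truth relighting: light transport is linear, so an image under
    any environment map is a weighted sum of the OLAT captures.
    olat: (150, H, W, 3), one image per light; env: (150, 3) RGB weights."""
    return torch.einsum('lhwc,lc->hwc', olat, env)

class LatentEditor(nn.Module):
    """Maps (source latent, target light, target pose) -> edited latent,
    so the edit is learned in StyleGAN's latent space, not in image space."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(W_DIM + ENV_DIM + POSE_DIM, 1024), nn.LeakyReLU(0.2),
            nn.Linear(1024, 1024), nn.LeakyReLU(0.2),
            nn.Linear(1024, W_DIM),
        )

    def forward(self, w_src, env, pose):
        x = torch.cat([w_src, env.flatten(1), pose], dim=1)
        return w_src + self.net(x)   # predict a residual latent edit

# One supervised step: w_src and w_tgt are latents of the same subject
# embedded into StyleGAN under the source and target light/pose, with the
# target images synthesised from the light-stage OLAT data as above.
editor = LatentEditor()
opt = torch.optim.Adam(editor.parameters(), lr=1e-4)

def train_step(w_src, w_tgt, env, pose):
    loss = F.l1_loss(editor(w_src, env, pose), w_tgt)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```

Learning the edit in latent space rather than pixel space is what lets the StyleGAN prior generalise to expressions, hairstyles and backgrounds that the light-stage captures never contain.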

Bibliographic Details
Main Authors: R, Mallikarjun B; Tewari, Ayush; Dib, Abdallah; Weyrich, Tim; Bickel, Bernd; Seidel, Hans-Peter; Pfister, Hanspeter; Matusik, Wojciech; Chevallier, Louis; Elgharib, Mohamed; Theobalt, Christian
Format: Article
Language: English
Published: Association for Computing Machinery (ACM), 2021
Journal: ACM Transactions on Graphics
DOI: 10.1145/3450626.3459765
License: Creative Commons Attribution 4.0 International (https://creativecommons.org/licenses/by/4.0/)
Institution: Massachusetts Institute of Technology
Online Access: https://hdl.handle.net/1721.1/134206