An attention-embedded GAN for SVBRDF recovery from a single image


Bibliographic Details
Main Authors: Zeqi Shi, Xiangyu Lin, Ying Song
Format: Article
Language: English
Published: SpringerOpen, 2023-03-01
Series: Computational Visual Media
Online Access: https://doi.org/10.1007/s41095-022-0289-1
Description
Summary: Abstract Learning-based approaches have made substantial progress in capturing spatially-varying bidirectional reflectance distribution functions (SVBRDFs) from a single image with unknown lighting and geometry. However, most existing networks consider only per-pixel losses, which limits their ability to recover local features such as smooth glossy regions. A few generative adversarial networks use multiple discriminators for different parameter maps, increasing network complexity. We present a novel end-to-end generative adversarial network (GAN) to recover appearance from a single picture of a nearly-flat surface lit by a flash. We use a single unified adversarial framework for all parameter maps. An attention module guides the network to focus on details of the maps. Furthermore, the SVBRDF map loss is combined with the adversarial loss to prevent paying excess attention to specular highlights. We demonstrate and evaluate our method on both public datasets and real data. Quantitative analysis and visual comparisons indicate that our method achieves better results than the state of the art in most cases.
ISSN: 2096-0433, 2096-0662
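The abstract's key idea of combining a per-pixel SVBRDF map loss with a single shared adversarial term (instead of one discriminator per parameter map) can be sketched as follows. This is a minimal NumPy illustration under stated assumptions, not the paper's actual formulation: the L1 map loss, the non-saturating GAN loss, and the names `total_loss` and `lam` are all hypothetical choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def total_loss(pred, gt, disc_fake, lam=0.01):
    """Hypothetical combined objective: per-pixel L1 over the four SVBRDF
    parameter maps plus one adversarial term from a single discriminator
    scoring all maps jointly (assumption; weights are illustrative)."""
    # per-pixel L1 averaged over normal, diffuse, roughness, specular maps
    map_loss = np.mean([np.abs(pred[k] - gt[k]).mean() for k in gt])
    # generator's non-saturating adversarial loss from the discriminator's
    # raw score on the fake maps (sigmoid cross-entropy toward "real")
    adv_loss = -np.log(1.0 / (1.0 + np.exp(-disc_fake)) + 1e-8)
    return map_loss + lam * adv_loss

# toy 8x8 maps standing in for network predictions and ground truth
keys = ["normal", "diffuse", "roughness", "specular"]
gt   = {k: rng.random((8, 8, 3)) for k in keys}
pred = {k: g + 0.05 * rng.standard_normal(g.shape) for k, g in gt.items()}

loss = total_loss(pred, gt, disc_fake=-1.0)
print(float(loss))
```

A small adversarial weight keeps the pixel-wise map loss dominant, which is one common way to stop the GAN term from over-emphasizing high-contrast regions such as specular highlights.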