A survey for light field super-resolution


Bibliographic Details
Main Authors: Mingyuan Zhao, Hao Sheng, Da Yang, Sizhe Wang, Ruixuan Cong, Zhenglong Cui, Rongshan Chen, Tun Wang, Shuai Wang, Yang Huang, Jiahao Shen
Format: Article
Language: English
Published: Elsevier 2024-03-01
Series: High-Confidence Computing
Online Access:http://www.sciencedirect.com/science/article/pii/S2667295224000096
Description
Summary: Compared to 2D imaging data, 4D light field (LF) data retains richer scene structure information, which can significantly improve a computer's perception capability in tasks such as depth estimation, semantic segmentation, and LF rendering. However, there is a trade-off between spatial and angular resolution during LF image acquisition. To overcome this problem, researchers have increasingly focused on light field super-resolution (LFSR). In traditional solutions, researchers achieved LFSR with various optimization frameworks, such as Bayesian and Gaussian models. Deep learning-based methods are more popular than conventional methods because they offer better performance and more robust generalization. In this paper, existing approaches are mainly divided into conventional methods and deep learning-based methods. We discuss these two branches for light field spatial super-resolution (LFSSR), light field angular super-resolution (LFASR), and light field spatial-angular super-resolution (LFSASR), respectively. Subsequently, this paper introduces the primary public datasets and analyzes the performance of prevalent approaches on them. Finally, we discuss potential innovations in LFSR to suggest directions for progress in the field.
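To make the three task settings concrete, the following is a minimal sketch (not taken from the surveyed paper) assuming the common two-plane parameterisation, where an LF is stored as a tensor of shape (U, V, H, W): (U, V) index the angular views and (H, W) the spatial pixels. Nearest-neighbour duplication stands in for the learned upsampling that actual LFSSR/LFASR/LFSASR methods perform; all names and sizes are illustrative assumptions.

    # Illustrative sketch: 4D light field as a (U, V, H, W) tensor.
    import numpy as np

    U, V, H, W = 5, 5, 32, 48            # hypothetical low-resolution LF
    lf = np.random.rand(U, V, H, W)      # stand-in for captured LF data

    # LFSSR: raise the spatial resolution of every sub-aperture view
    # (naive nearest-neighbour x2 here; surveyed methods learn this step).
    lf_ssr = lf.repeat(2, axis=2).repeat(2, axis=3)       # -> (5, 5, 64, 96)

    # LFASR: synthesise new views to raise the angular resolution
    # (naive view duplication here; real methods interpolate novel views).
    lf_asr = lf.repeat(2, axis=0).repeat(2, axis=1)       # -> (10, 10, 32, 48)

    # LFSASR: increase both, addressing the spatial/angular trade-off
    # described in the abstract.
    lf_sasr = lf_ssr.repeat(2, axis=0).repeat(2, axis=1)  # -> (10, 10, 64, 96)

    print(lf.shape, lf_ssr.shape, lf_asr.shape, lf_sasr.shape)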
ISSN:2667-2952