Light‐field image super‐resolution with depth feature by multiple‐decouple and fusion module

Bibliographic Details
Main Authors: Ka‐Hou Chan, Sio‐Kei Im
Format: Article
Language: English
Published: Wiley 2024-01-01
Series: Electronics Letters
Online Access: https://doi.org/10.1049/ell2.13019
Description
Summary: Light-field (LF) images capture a scene from multiple perspectives, which improves feature extraction and also yields additional normal vectors that can support super-resolution (SR) image processing. Established deep CNN models for LF image SR are typically tailored to a particular resolution, and this rigidity limits their use in practical LF applications, where angular resolution varies considerably across LF instruments. A more flexible neural network design is therefore needed so that a single CNN-based model can super-resolve LF images of different resolutions from the provided features. In this work, a preprocessing step that computes a depth channel from the given LF information is first presented. A multiple-decouple and fusion module is then introduced to integrate the VGGreNet for LF image SR; it collects global-to-local information according to the CNN kernel size and dynamically constructs each view through a global view module. The generated features are further transformed into a uniform space for final fusion, achieving global alignment for precise extraction of angular information. Experimental results show that the proposed method handles benchmark LF datasets of varying angular and image resolutions, demonstrating its effectiveness and potential.
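
Illustration only, not the authors' published code: the minimal PyTorch sketch below shows one plausible reading of a "multiple-decouple and fusion" block as described in the abstract, assuming parallel convolution branches with different kernel sizes extract global-to-local features from a sub-aperture view (RGB plus a precomputed depth channel) and the branch outputs are projected into a uniform feature space before fusion. All class, function, and parameter names are hypothetical.

# Hypothetical sketch of a multi-branch decouple-and-fusion block (assumed design).
import torch
import torch.nn as nn


class MultiDecoupleFusion(nn.Module):
    """Extract features at several kernel sizes, then fuse them in a shared space."""

    def __init__(self, in_channels: int, feat_channels: int = 64,
                 kernel_sizes=(3, 5, 7)):
        super().__init__()
        # One branch per kernel size: larger kernels gather more global context,
        # smaller kernels keep local detail ("global-to-local" decoupling).
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(in_channels, feat_channels, k, padding=k // 2),
                nn.LeakyReLU(0.1, inplace=True),
            )
            for k in kernel_sizes
        ])
        # Fusion: concatenate branch outputs and project them into one uniform space.
        self.fuse = nn.Conv2d(feat_channels * len(kernel_sizes), feat_channels, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = [branch(x) for branch in self.branches]
        return self.fuse(torch.cat(feats, dim=1))


if __name__ == "__main__":
    # Example: one sub-aperture view with RGB plus an extra depth channel (4 channels).
    view = torch.randn(1, 4, 64, 64)
    block = MultiDecoupleFusion(in_channels=4)
    print(block(view).shape)  # torch.Size([1, 64, 64, 64])

In this reading, the 1x1 convolution plays the role of the "uniform space" projection mentioned in the abstract; the depth-channel preprocessing, global view module, and VGGreNet backbone of the actual method are outside the scope of this sketch.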
ISSN: 0013-5194, 1350-911X