Light‐field image super‐resolution with depth feature by multiple‐decouple and fusion module

Abstract: Light‐field (LF) images offer the potential to improve feature capture of live scenes from multiple perspectives, and also provide additional normal vectors for super‐resolution (SR) image processing. With the benefit of machine learning, established AI‐based deep CNN models for LF image SR are often individualized for particular resolutions. However, these approaches are rigid in practical LF applications because angular resolution varies considerably across LF instruments. An advanced neural network is therefore required so that a single CNN‐based model can super‐resolve LF images of different resolutions from the provided features. In this work, a preprocessing step that computes a depth channel from the given LF information is first presented, and a multiple‐decouple and fusion module is then introduced to integrate the VGGreNet for LF image SR. This module collects global‐to‐local information according to the CNN kernel size and dynamically constructs each view through a global view module. In addition, the generated features are transformed into a uniform space for the final fusion, achieving global alignment for precise extraction of angular information. Experimental results show that the proposed method can handle benchmark LF datasets with various angular and spatial resolutions, demonstrating the effectiveness and potential performance of the method.
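
To make the described pipeline concrete, the following is a minimal PyTorch sketch of the multiple‐decouple and fusion idea summarized in the abstract: parallel convolution branches with different kernel sizes collect global‐to‐local features from a sub‐aperture view augmented with the precomputed depth channel, and a 1x1 convolution projects the branch outputs into a uniform feature space for fusion. The class name, channel sizes, kernel sizes, and the use of a fourth depth input channel are illustrative assumptions; this is not the authors' VGGreNet or global view module.

    # Illustrative sketch only; layer names and sizes are assumptions.
    import torch
    import torch.nn as nn

    class MultiDecoupleFusion(nn.Module):
        def __init__(self, in_channels: int = 4, feat_channels: int = 32,
                     kernel_sizes=(3, 5, 7)):
            super().__init__()
            # One "decouple" branch per kernel size: larger kernels capture more
            # global context, smaller kernels preserve local detail.
            self.branches = nn.ModuleList(
                nn.Sequential(
                    nn.Conv2d(in_channels, feat_channels, k, padding=k // 2),
                    nn.ReLU(inplace=True),
                )
                for k in kernel_sizes
            )
            # 1x1 convolution that maps the concatenated branch outputs into a
            # uniform feature space before any final fusion/upsampling stages.
            self.fuse = nn.Conv2d(feat_channels * len(kernel_sizes), feat_channels, 1)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # x: (batch, in_channels, H, W), e.g. an RGB sub-aperture view plus
            # the precomputed depth channel as a fourth input channel.
            feats = [branch(x) for branch in self.branches]
            return self.fuse(torch.cat(feats, dim=1))

    if __name__ == "__main__":
        view_with_depth = torch.randn(1, 4, 64, 64)   # RGB + depth, 64x64 view
        out = MultiDecoupleFusion()(view_with_depth)
        print(out.shape)                              # torch.Size([1, 32, 64, 64])

In a full SR network, the fused features of each view would presumably feed the per‐view reconstruction and upsampling stages described in the paper.
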

Bibliographic Details
Main Authors: Ka‐Hou Chan, Sio‐Kei Im (Faculty of Applied Sciences, Macao Polytechnic University, Macau, China)
Format: Article
Language: English
Published: Wiley, 2024-01-01
Series: Electronics Letters, Volume 60, Issue 1
ISSN: 0013-5194, 1350-911X
Subjects: adaptive signal processing; image fusion; image processing; neural net architecture; spatial filters
Online Access: https://doi.org/10.1049/ell2.13019