Attention-based encoder–decoder network for depth estimation from color-coded light fields
Compressive light field cameras have attracted notable attention over the past few years because they can efficiently determine redundancy from light fields. However, much of the research has only concentrated on reconstructing the entire light field from compressed sampling, which ignores the possibility of directly extracting information such as depth from it. In this paper, we introduce a light field camera configuration with a random color-coded microlens array. Considering the color-coded light fields, we propose a novel attention-based encoder–decoder network. Specifically, the encoder part compresses the coded measurement into a low-dimensional representation that removes most redundancy, and the decoder part constructs the depth map directly from the latent representation. The attention mechanism enables the network to process spatial and angular features dynamically and effectively, thus significantly improving performance. Extensive experiments on synthetic and real-world datasets show that our method outperforms the state-of-the-art light field depth estimation method designed for non-coded light fields. To our knowledge, this is the first study that combines the color-coded light field with the attention-based deep learning approach, which provides a crucial insight into the design of enhanced light field photography systems.
Main Authors: | Hao Sheng, Kun Cheng, Xiaokang Jin, Tian Han, Xiaolin Jiang, Changchun Dong
Format: | Article
Language: | English
Published: | AIP Publishing LLC, 2023-03-01
Series: | AIP Advances
Online Access: | http://dx.doi.org/10.1063/5.0140530
collection | DOAJ |
description | Compressive light field cameras have attracted notable attention over the past few years because they can efficiently determine redundancy from light fields. However, much of the research has only concentrated on reconstructing the entire light field from compressed sampling, which ignores the possibility of directly extracting information such as depth from it. In this paper, we introduce a light field camera configuration with a random color-coded microlens array. Considering the color-coded light fields, we propose a novel attention-based encoder–decoder network. Specifically, the encoder part compresses the coded measurement into a low-dimensional representation that removes most redundancy, and the decoder part constructs the depth map directly from the latent representation. The attention mechanism enables the network to process spatial and angular features dynamically and effectively, thus significantly improving performance. Extensive experiments on synthetic and real-world datasets show that our method outperforms the state-of-the-art light field depth estimation method designed for non-coded light fields. To our knowledge, this is the first study that combines the color-coded light field with the attention-based deep learning approach, which provides a crucial insight into the design of enhanced light field photography systems. |
id | doaj.art-ec66203bac454ad8a1a723cf882f1167 |
issn | 2158-3226 |
citation | AIP Advances, vol. 13, no. 3, article 035118 (pp. 035118-1–035118-11), 2023-03-01. doi:10.1063/5.0140530
affiliations | Hao Sheng, Tian Han, Xiaolin Jiang, Changchun Dong: Artificial Intelligence Laboratory, Jinhua Advanced Research Institute, Jinhua 321013, People’s Republic of China. Kun Cheng: Mechatronics Engineering College, Jinhua Polytechnic, Jinhua 321016, People’s Republic of China. Xiaokang Jin: Cyberspace Security Laboratory, Jinhua Advanced Research Institute, Jinhua 321013, People’s Republic of China.
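The pipeline the abstract describes (random color-coded mask → encoder to a low-dimensional latent → attention re-weighting → decoder straight to a depth map, with no light field reconstruction step) can be sketched numerically. This is a purely illustrative toy, not the authors' network: all dimensions, weights, and the single-layer linear stages are assumptions made for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# Toy dimensions (assumptions, not taken from the paper):
H = W = 8   # spatial resolution of the coded sensor measurement
C = 3       # color channels of the color-coded measurement
D = 16      # latent dimension after encoding

# Random color-coded mask: each microlens position passes a random RGB code.
mask = rng.integers(0, 2, size=(H, W, C)).astype(float)
scene = rng.random((H, W, C))
measurement = scene * mask                 # coded measurement on the sensor

# Encoder: project the measurement to a low-dimensional latent,
# discarding most of the light field's redundancy.
W_enc = rng.standard_normal((H * W * C, D)) * 0.1
latent = measurement.reshape(-1) @ W_enc   # shape (D,)

# Attention: compute per-feature relevance scores and re-weight the latent,
# standing in for the paper's dynamic spatial/angular feature weighting.
W_attn = rng.standard_normal((D, D)) * 0.1
attn = softmax(latent @ W_attn)            # scores sum to 1
attended = latent * attn

# Decoder: map the attended latent directly to a depth map,
# skipping light field reconstruction entirely.
W_dec = rng.standard_normal((D, H * W)) * 0.1
depth = (attended @ W_dec).reshape(H, W)

print(depth.shape)  # (8, 8): one depth value per spatial position
```

In a real network each linear stage would be a stack of learned convolutional layers and the attention would act jointly over spatial and angular axes; the sketch only shows how depth can come straight from the compressed latent rather than from a reconstructed light field.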