Depth Map Super-Resolution Using Guided Deformable Convolution
Depth maps acquired by low-cost sensors have low spatial resolution, which restricts their usefulness in many image processing and computer vision tasks. To increase the spatial resolution of the depth map, most state-of-the-art deep-learning-based depth map super-resolution methods extract features from a high-resolution guidance image...
Main Authors: Joon-Yeon Kim; Seowon Ji; Seung-Jin Baek; Seung-Won Jung; Sung-Jea Ko
Format: Article
Language: English
Published: IEEE, 2021-01-01
Series: IEEE Access
Subjects: Convolutional neural network; depth map; super-resolution
Online Access: https://ieeexplore.ieee.org/document/9420066/
_version_ | 1823943836261416960 |
author | Joon-Yeon Kim; Seowon Ji; Seung-Jin Baek; Seung-Won Jung; Sung-Jea Ko
author_facet | Joon-Yeon Kim; Seowon Ji; Seung-Jin Baek; Seung-Won Jung; Sung-Jea Ko
author_sort | Joon-Yeon Kim |
collection | DOAJ |
description | Depth maps acquired by low-cost sensors have low spatial resolution, which restricts their usefulness in many image processing and computer vision tasks. To increase the spatial resolution of the depth map, most state-of-the-art deep-learning-based depth map super-resolution methods extract features from a high-resolution guidance image and concatenate them with features from the depth map. However, such simple concatenation can transfer unnecessary textures of the guidance image to the depth map, an effect known as texture copying artifacts. To address this problem, we propose a novel depth map super-resolution method using guided deformable convolution. Unlike standard deformable convolution, guided deformable convolution obtains the 2D kernel offsets for the depth features from the guidance features. Because the guidance features are not explicitly concatenated with the depth features but are used only to determine the kernel offsets, the proposed method significantly alleviates texture copying artifacts in the resultant depth map. Experimental results show that the proposed method outperforms state-of-the-art methods in both qualitative and quantitative evaluations. |
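To make the mechanism described in the abstract concrete, below is a minimal sketch of guided deformable convolution in PyTorch, built on torchvision's `deform_conv2d`. The module structure, channel sizes, and the single-convolution offset predictor are illustrative assumptions, not the authors' published architecture; the only element taken from the abstract is that the kernel offsets applied to the depth features are predicted from the guidance features alone, without concatenating the two feature streams.

```python
# A minimal sketch of the guided deformable convolution idea, assuming PyTorch
# and torchvision. Layer names and channel sizes are hypothetical placeholders.
import torch
import torch.nn as nn
from torchvision.ops import deform_conv2d

class GuidedDeformConv(nn.Module):
    def __init__(self, depth_ch=64, guide_ch=64, out_ch=64, k=3):
        super().__init__()
        self.k = k
        # Kernel offsets (2 values per kernel tap) are predicted from the
        # *guidance* features only.
        self.offset_pred = nn.Conv2d(guide_ch, 2 * k * k, kernel_size=3, padding=1)
        # The deformable kernel itself operates on the *depth* features.
        self.weight = nn.Parameter(torch.randn(out_ch, depth_ch, k, k) * 0.01)
        self.bias = nn.Parameter(torch.zeros(out_ch))

    def forward(self, depth_feat, guide_feat):
        # Guidance features are never concatenated with depth features; they
        # only steer the sampling locations, which is what limits texture copying.
        offsets = self.offset_pred(guide_feat)
        return deform_conv2d(depth_feat, offsets, self.weight, self.bias,
                             stride=1, padding=self.k // 2)

# Usage: both feature maps must share the same spatial size.
depth_feat = torch.randn(1, 64, 32, 32)
guide_feat = torch.randn(1, 64, 32, 32)
out = GuidedDeformConv()(depth_feat, guide_feat)  # -> shape (1, 64, 32, 32)
```

In this sketch the guidance branch influences only *where* the depth kernel samples, not *what* values it mixes in, so guidance texture has no direct path into the output features.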
first_indexed | 2024-12-17T08:03:30Z |
format | Article |
id | doaj.art-5aee57502983453facbe86c00ce491c1 |
institution | Directory Open Access Journal |
issn | 2169-3536 |
language | English |
last_indexed | 2024-12-17T08:03:30Z |
publishDate | 2021-01-01 |
publisher | IEEE |
record_format | Article |
series | IEEE Access |
spelling | doaj.art-5aee57502983453facbe86c00ce491c1 2022-12-21T21:57:28Z eng IEEE, IEEE Access, ISSN 2169-3536, 2021-01-01, vol. 9, pp. 66626-66635, DOI 10.1109/ACCESS.2021.3076853, article no. 9420066. Depth Map Super-Resolution Using Guided Deformable Convolution. Joon-Yeon Kim (https://orcid.org/0000-0003-1648-8322); Seowon Ji (https://orcid.org/0000-0001-8700-8440); Seung-Jin Baek (https://orcid.org/0000-0003-0494-2372); Seung-Won Jung (https://orcid.org/0000-0002-0319-4467); Sung-Jea Ko (https://orcid.org/0000-0002-4875-7091); all affiliated with the School of Electrical Engineering, Korea University, Seoul, South Korea. [Abstract as in the description field above.] https://ieeexplore.ieee.org/document/9420066/ Keywords: Convolutional neural network; depth map; super-resolution |
spellingShingle | Joon-Yeon Kim; Seowon Ji; Seung-Jin Baek; Seung-Won Jung; Sung-Jea Ko; Depth Map Super-Resolution Using Guided Deformable Convolution; IEEE Access; Convolutional neural network; depth map; super-resolution
title | Depth Map Super-Resolution Using Guided Deformable Convolution |
title_full | Depth Map Super-Resolution Using Guided Deformable Convolution |
title_fullStr | Depth Map Super-Resolution Using Guided Deformable Convolution |
title_full_unstemmed | Depth Map Super-Resolution Using Guided Deformable Convolution |
title_short | Depth Map Super-Resolution Using Guided Deformable Convolution |
title_sort | depth map super resolution using guided deformable convolution |
topic | Convolutional neural network; depth map; super-resolution
url | https://ieeexplore.ieee.org/document/9420066/ |
work_keys_str_mv | AT joonyeonkim depthmapsuperresolutionusingguideddeformableconvolution AT seowonji depthmapsuperresolutionusingguideddeformableconvolution AT seungjinbaek depthmapsuperresolutionusingguideddeformableconvolution AT seungwonjung depthmapsuperresolutionusingguideddeformableconvolution AT sungjeako depthmapsuperresolutionusingguideddeformableconvolution |