Weakly Supervised Deep Depth Prediction Leveraging Ground Control Points for Guidance

Despite the tremendous progress made in learning-based depth prediction, most methods rely heavily on large amounts of dense ground-truth depth data for training. To solve the tradeoff between the labeling cost and precision, we propose a novel weakly supervised approach, namely, the Guided-Net, by incorporating robust ground control points for guidance. By exploiting the guidance from ground control points, disparity edge gradients, and image appearance constraints, our improved network with deformable convolutional layers is empowered to learn in a more efficient way. The experiments on the KITTI, Cityscapes, and Make3D datasets demonstrate that the proposed method yields a performance superior to that of the existing weakly supervised approaches and achieves results comparable to those of the semisupervised and supervised frameworks.
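
The abstract describes a composite weakly supervised training signal: sparse ground control points, disparity edge gradients, and an image appearance (photometric) constraint. The sketch below is only a minimal illustration of that general recipe, not the authors' Guided-Net code; it assumes a PyTorch setup with rectified stereo pairs, and every function name, loss weight, and shape convention here is a hypothetical choice.

# Illustrative only: a generic weakly supervised stereo/depth objective with
# (1) sparse ground-control-point (GCP) guidance, (2) edge-aware disparity
# smoothness, and (3) a photometric appearance term. Names, weights, and the
# warping convention are assumptions, not taken from the Guided-Net paper.
import torch
import torch.nn.functional as F


def gcp_loss(pred_disp, gcp_disp, gcp_mask):
    # L1 penalty only at pixels where a reliable ground control point exists.
    # gcp_mask is a {0, 1} mask of the same shape as the disparity maps.
    gcp_mask = gcp_mask.float()
    diff = (pred_disp - gcp_disp).abs() * gcp_mask
    return diff.sum() / gcp_mask.sum().clamp(min=1.0)


def edge_aware_smoothness(pred_disp, image):
    # Penalize disparity gradients, down-weighted where the image has strong edges.
    dx_d = (pred_disp[:, :, :, 1:] - pred_disp[:, :, :, :-1]).abs()
    dy_d = (pred_disp[:, :, 1:, :] - pred_disp[:, :, :-1, :]).abs()
    dx_i = (image[:, :, :, 1:] - image[:, :, :, :-1]).abs().mean(1, keepdim=True)
    dy_i = (image[:, :, 1:, :] - image[:, :, :-1, :]).abs().mean(1, keepdim=True)
    return (dx_d * torch.exp(-dx_i)).mean() + (dy_d * torch.exp(-dy_i)).mean()


def photometric_loss(left, right, pred_disp):
    # Warp the right image into the left view with the predicted disparity
    # (rectified stereo, disparity in pixels) and compare to the left image.
    b, _, h, w = left.shape
    ys = torch.linspace(-1.0, 1.0, h, device=left.device)
    xs = torch.linspace(-1.0, 1.0, w, device=left.device)
    grid_y, grid_x = torch.meshgrid(ys, xs, indexing="ij")
    grid_x = grid_x.unsqueeze(0) - 2.0 * pred_disp.squeeze(1) / w  # shift by normalized disparity
    grid_y = grid_y.unsqueeze(0).expand(b, -1, -1)
    grid = torch.stack((grid_x, grid_y), dim=-1)  # (B, H, W, 2) in x/y order
    warped = F.grid_sample(right, grid, align_corners=True)
    return F.l1_loss(warped, left)


def guided_weak_loss(left, right, pred_disp, gcp_disp, gcp_mask,
                     w_gcp=1.0, w_smooth=0.1, w_photo=1.0):
    # Hypothetical weighting of the three guidance terms.
    return (w_gcp * gcp_loss(pred_disp, gcp_disp, gcp_mask)
            + w_smooth * edge_aware_smoothness(pred_disp, left)
            + w_photo * photometric_loss(left, right, pred_disp))

In the paper these signals guide an improved network with deformable convolutional layers; that architecture is not sketched here.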


Bibliographic Details
Main Authors: Liang Du, Jiamao Li, Xiaoqing Ye, Xiaolin Zhang
Format: Article
Language: English
Published: IEEE, 2019-01-01
Series: IEEE Access
Subjects: Computer vision, stereo image processing, stereo vision, weakly supervised learning
Online Access: https://ieeexplore.ieee.org/document/8570753/
collection DOAJ
id doaj.art-e612785974a143b6b6ff5915587d1774
institution Directory Open Access Journal
issn 2169-3536
doi 10.1109/ACCESS.2018.2885773
citation IEEE Access, vol. 7, pp. 5736-5748, published 2019-01-01 (IEEE Xplore document 8570753)
orcid Liang Du: 0000-0002-7952-5736; Xiaoqing Ye: 0000-0003-3268-880X
affiliation Bio-Vision System Laboratory, State Key Laboratory of Transducer Technology, Shanghai Institute of Microsystem and Information Technology, Chinese Academy of Sciences, Shanghai, China (all four authors)