Depth Estimation of Non-Rigid Objects For Time-Of-Flight Imaging
Main Authors:
Other Authors:
Format: Article
Language: en_US
Published: Institute of Electrical and Electronics Engineers (IEEE), 2018
Online Access: http://hdl.handle.net/1721.1/119397, https://orcid.org/0000-0001-8552-7458, https://orcid.org/0000-0003-4841-3990
Summary: Depth sensing is useful for a variety of applications that range from augmented reality to robotics. Time-of-flight (TOF) cameras are appealing because they obtain dense depth measurements with low latency. However, for reasons ranging from power constraints to multi-camera interference, the frequency at which accurate depth measurements can be obtained is reduced. To address this, we propose an algorithm that uses concurrently collected images to estimate the depth of non-rigid objects without using the TOF camera. Our technique models non-rigid objects as locally rigid and uses previous depth measurements along with the optical flow of the images to estimate depth. In particular, we show how we exploit the previous depth measurements to directly estimate pose, and how we integrate this with our model to estimate the depth of non-rigid objects by solving a sparse linear system. We evaluate our technique on an RGB-D dataset of deformable objects, where we estimate depth with a mean relative error of 0.37% and outperform other adapted techniques.
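To make the summary's core idea concrete, the sketch below shows one way depth can be propagated from a previous measurement using optical flow and then regularized by solving a sparse linear least-squares system. This is a deliberately simplified illustration, not the authors' exact formulation: it omits the local rigid-pose estimation described in the abstract, the function name `estimate_depth` and the weighting parameter `lam` are assumptions, and the smoothness term is a generic 4-neighbor penalty standing in for the paper's locally rigid model.

```python
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import lsqr

def estimate_depth(prev_depth, flow, lam=1.0):
    """Estimate the current frame's depth from a previous depth map and
    optical flow by solving a sparse linear least-squares system.

    Data term: each pixel of prev_depth, advected along the flow,
    constrains the depth at its landing pixel in the current frame.
    Smoothness term: neighboring depths should be similar, a crude
    stand-in for a locally rigid surface model.
    """
    H, W = prev_depth.shape
    n = H * W
    rows, cols, vals, rhs = [], [], [], []
    eq = 0
    # Data term: warp each previous pixel forward along the flow and
    # pin the depth at the (rounded) landing coordinate.
    for y in range(H):
        for x in range(W):
            xt = int(round(x + flow[y, x, 0]))
            yt = int(round(y + flow[y, x, 1]))
            if 0 <= xt < W and 0 <= yt < H:
                rows.append(eq); cols.append(yt * W + xt); vals.append(1.0)
                rhs.append(prev_depth[y, x]); eq += 1
    # Smoothness term: penalize depth differences between 4-neighbors,
    # which also fills in pixels that received no flow correspondence.
    for y in range(H):
        for x in range(W):
            i = y * W + x
            for ny, nx in ((y, x + 1), (y + 1, x)):
                if ny < H and nx < W:
                    j = ny * W + nx
                    rows += [eq, eq]; cols += [i, j]; vals += [lam, -lam]
                    rhs.append(0.0); eq += 1
    A = sparse.coo_matrix((vals, (rows, cols)), shape=(eq, n)).tocsr()
    d = lsqr(A, np.asarray(rhs))[0]
    return d.reshape(H, W)
```

Because both terms are linear in the unknown depths, the whole problem reduces to one sparse least-squares solve, which is what makes per-frame estimation cheap relative to re-firing the TOF sensor.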