Low Power Depth Estimation of Rigid Objects for Time-of-Flight Imaging

Depth sensing is useful in a variety of applications that range from augmented reality to robotics. Time-of-flight (TOF) cameras are appealing because they obtain dense depth measurements with minimal latency. However, for many battery-powered devices, the illumination source of a TOF camera is power-hungry and can limit the battery life of the device. To address this issue, we present an algorithm that lowers the power for depth sensing by reducing the usage of the TOF camera and estimating depth maps using concurrently collected images. Our technique also adaptively controls the TOF camera and enables it when an accurate depth map cannot be estimated. To ensure that the overall system power for depth sensing is reduced, we design our algorithm to run on a low-power embedded platform, where it outputs 640 × 480 depth maps at 30 frames per second. We evaluate our approach on several RGB-D datasets, where it produces depth maps with an overall mean relative error of 0.96% and reduces the usage of the TOF camera by 85%. When used with commercial TOF cameras, we estimate that our algorithm can lower the total power for depth sensing by up to 73%.
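The core idea the abstract describes — reusing a previous TOF depth map by warping it through an estimated rigid motion, and re-enabling the TOF camera only when the estimate is unreliable — can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: the pinhole intrinsics `K`, the rigid transform `(R, t)` (assumed to be estimated elsewhere from the concurrently collected images), and the function name are assumptions for the sketch.

```python
import numpy as np

def reproject_depth(depth, K, R, t):
    """Predict the current frame's depth map by warping the previous
    one through a rigid motion [R|t] (illustrative sketch).

    depth : (H, W) previous depth map in meters
    K     : 3x3 pinhole camera intrinsics
    R, t  : rotation (3x3) and translation (3,) from the previous
            frame's camera coordinates to the current frame's
    Returns an (H, W) depth map; pixels no 3D point projects onto
    are left at zero (where a real system might re-fire the TOF
    camera).
    """
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T  # 3xN

    # Back-project each pixel to 3D, apply the rigid motion, project back.
    pts = np.linalg.inv(K) @ (pix * depth.reshape(1, -1))
    pts = R @ pts + t.reshape(3, 1)
    proj = K @ pts

    z = proj[2]
    valid = z > 1e-6  # keep only points in front of the camera
    u2 = np.round(proj[0, valid] / z[valid]).astype(int)
    v2 = np.round(proj[1, valid] / z[valid]).astype(int)
    inb = (u2 >= 0) & (u2 < W) & (v2 >= 0) & (v2 < H)

    out = np.zeros_like(depth)
    out[v2[inb], u2[inb]] = z[valid][inb]  # nearest-pixel splat
    return out
```

A real pipeline would also resolve occlusions (keep the nearest depth when two points splat to the same pixel) and compare the warped map against the images to decide when to trigger a fresh TOF measurement; those steps are omitted here for brevity.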

Bibliographic Details
Main Authors: Noraky, James, Sze, Vivienne
Other Authors: Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
Format: Article
Published: Institute of Electrical and Electronics Engineers (IEEE), 2021
Online Access: https://hdl.handle.net/1721.1/130109
Citation: Noraky, James and Vivienne Sze. "Low Power Depth Estimation of Rigid Objects for Time-of-Flight Imaging." IEEE Transactions on Circuits and Systems for Video Technology 30, 6 (June 2020): 1524–1534.
ISSN: 1051-8215, 1558-2205
DOI: http://dx.doi.org/10.1109/tcsvt.2019.2907904
Rights: © 2020 IEEE; Creative Commons Attribution-Noncommercial-Share Alike, http://creativecommons.org/licenses/by-nc-sa/4.0/