Hardware Implementation of a Complete Vision-Based Navigation Pipeline
Autonomous navigation technology has made great advances, but many successful hardware systems rely on LiDAR. LiDAR is expensive and computationally costly, while vision, which is typically used in combination with LiDAR, offers additional benefits without the same cost concerns. Ideally, vision could replace LiDAR entirely; there has been extensive work on vision-based alternatives for each module of the autonomy pipeline, but no complete vision-based navigation pipeline is well established. This project integrates vision-based object tracking, state estimation, and collision-avoidance planning modules via the Robot Operating System (ROS) and implements the system on hardware. Both the state estimation module, OpenVINS, and the object tracking module, CenterTrack 2D with depth images, are benchmarked on our hardware setup and found to have displacement error within 0.2 meters. Experimental trials in a real environment demonstrate the complete pipeline's ability to navigate to a goal about 8 meters away in the presence of up to 6 naturally moving pedestrians.
Main Author: | Ni, Susan |
---|---|
Other Authors: | How, Jonathan P. |
Format: | Thesis |
Published: | Massachusetts Institute of Technology, 2022 |
Online Access: | https://hdl.handle.net/1721.1/140151 |
author | Ni, Susan |
author2 | How, Jonathan P. |
collection | MIT |
description | Autonomous navigation technology has made great advances, but many successful hardware systems rely on LiDAR. LiDAR is expensive and computationally costly, while vision, which is typically used in combination with LiDAR, offers additional benefits without the same cost concerns. Ideally, vision could replace LiDAR entirely; there has been extensive work on vision-based alternatives for each module of the autonomy pipeline, but no complete vision-based navigation pipeline is well established. This project integrates vision-based object tracking, state estimation, and collision-avoidance planning modules via the Robot Operating System (ROS) and implements the system on hardware. Both the state estimation module, OpenVINS, and the object tracking module, CenterTrack 2D with depth images, are benchmarked on our hardware setup and found to have displacement error within 0.2 meters. Experimental trials in a real environment demonstrate the complete pipeline's ability to navigate to a goal about 8 meters away in the presence of up to 6 naturally moving pedestrians. |
format | Thesis |
id | mit-1721.1/140151 |
institution | Massachusetts Institute of Technology |
publishDate | 2022 |
publisher | Massachusetts Institute of Technology |
record_format | dspace |
spelling | mit-1721.1/140151; Hardware Implementation of a Complete Vision-Based Navigation Pipeline; Ni, Susan; How, Jonathan P.; Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science; M.Eng.; 2021-09; Thesis; https://hdl.handle.net/1721.1/140151; In Copyright - Educational Use Permitted (Copyright MIT, http://rightsstatements.org/page/InC-EDU/1.0/); application/pdf |
title | Hardware Implementation of a Complete Vision-Based Navigation Pipeline |
url | https://hdl.handle.net/1721.1/140151 |