A Robust Method for Ego-Motion Estimation in Urban Environment Using Stereo Camera

Visual odometry estimates the ego-motion of an agent (e.g., a vehicle or robot) from image information and is a key component of autonomous vehicles and robotics. This paper proposes a robust and precise method for estimating 6-DoF ego-motion using a stereo rig with optical flow analysis. An objective function fitted to a set of feature points is created by establishing the mathematical relationship between optical flow, depth and the camera ego-motion parameters through the camera’s 3-dimensional motion and planar imaging model. The six motion parameters are then computed by minimizing this objective function with the iterative Levenberg–Marquardt method. A key requirement for visual odometry is that the feature points selected for the computation contain as many inliers as possible. In this work, the feature points and their optical flows are initially detected using the Kanade–Lucas–Tomasi (KLT) algorithm. Circle matching is then performed to remove outliers caused by KLT mismatches, and a space position constraint is imposed to filter moving points out of the detected point set. The Random Sample Consensus (RANSAC) algorithm is employed to further refine the feature point set, i.e., to eliminate the effects of outliers. The remaining points are tracked to estimate the ego-motion parameters in subsequent frames. The approach is tested on real traffic videos, and the results demonstrate its robustness and precision.
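The pipeline summarized above (an optical-flow/depth objective minimized by Levenberg–Marquardt over a RANSAC-selected inlier set) can be illustrated with a short sketch. The snippet below is a minimal, hypothetical illustration rather than the authors' implementation: it assumes a standard pinhole model with focal length f and image coordinates measured from the principal point, uses the classic instantaneous motion-field equations as the flow model, a simple RANSAC-style loop for inlier selection, and SciPy's Levenberg–Marquardt solver (scipy.optimize.least_squares with method="lm"). The flow measurements themselves would come from a KLT tracker such as OpenCV's cv2.calcOpticalFlowPyrLK, and the depths from stereo disparity. The residual model, thresholds, and function names are assumptions, and sign conventions may differ from the paper's derivation.

```python
import numpy as np
from scipy.optimize import least_squares

F = 700.0  # focal length in pixels (placeholder; comes from the stereo rig's calibration)

def predicted_flow(params, x, y, Z, f=F):
    """Instantaneous motion-field model: flow induced at image point (x, y),
    measured from the principal point, with depth Z, by camera translation
    (tx, ty, tz) and rotation rates (wx, wy, wz). Sign conventions may differ
    from the paper's derivation."""
    tx, ty, tz, wx, wy, wz = params
    u = (x * tz - f * tx) / Z - f * wy + wz * y + (wx * x * y - wy * x * x) / f
    v = (y * tz - f * ty) / Z + f * wx - wz * x + (wx * y * y - wy * x * y) / f
    return u, v

def residuals(params, x, y, Z, u_obs, v_obs):
    """Stacked difference between measured (KLT) flow and model-predicted flow."""
    u_pred, v_pred = predicted_flow(params, x, y, Z)
    return np.concatenate([u_pred - u_obs, v_pred - v_obs])

def estimate_ego_motion(x, y, Z, u_obs, v_obs, n_iters=200, thresh=0.5):
    """RANSAC-style inlier selection followed by a Levenberg-Marquardt refit.
    x, y: image coordinates relative to the principal point (pixels);
    Z: depth from stereo disparity; u_obs, v_obs: optical flow from KLT tracking.
    The sample size, iteration count and threshold are illustrative choices."""
    rng = np.random.default_rng(0)
    n = len(x)
    best_inliers = np.ones(n, dtype=bool)
    best_count = 0
    for _ in range(n_iters):
        idx = rng.choice(n, size=3, replace=False)  # 3 points give 6 equations for 6 DoF
        fit = least_squares(residuals, np.zeros(6), method="lm",
                            args=(x[idx], y[idx], Z[idx], u_obs[idx], v_obs[idx]))
        u_pred, v_pred = predicted_flow(fit.x, x, y, Z)
        err = np.hypot(u_pred - u_obs, v_pred - v_obs)
        inliers = err < thresh
        if inliers.sum() > best_count:
            best_count, best_inliers = int(inliers.sum()), inliers
    # Final Levenberg-Marquardt minimization over the consensus (inlier) set.
    final = least_squares(residuals, np.zeros(6), method="lm",
                          args=(x[best_inliers], y[best_inliers], Z[best_inliers],
                                u_obs[best_inliers], v_obs[best_inliers]))
    return final.x  # (tx, ty, tz, wx, wy, wz)
```

In a full pipeline, the circle matching and space position constraint described above would prune the KLT point set before this stage, and the estimated parameters would be integrated frame to frame to recover the vehicle trajectory.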

Bibliographic Details
Main Authors: Wenyan Ci, Yingping Huang
Format: Article
Language: English
Published: MDPI AG, 2016-10-01
Series: Sensors (Vol. 16, Issue 10, Article 1704)
ISSN: 1424-8220
DOI: 10.3390/s16101704
Subjects: visual odometry; ego-motion; stereovision; optical flow; RANSAC algorithm; space position constraint
Online Access: http://www.mdpi.com/1424-8220/16/10/1704
Author Affiliation: School of Optical-Electrical and Computer Engineering, University of Shanghai for Science & Technology, Shanghai 200093, China (both authors)