Learning Steering Bounds for Parallel Autonomous Systems

Bibliographic Details
Main Authors: Amini, Alexander A, Paull, Liam, Balch, Thomas M, Karaman, Sertac, Rus, Daniela L
Other Authors: Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory
Format: Article
Language: en_US
Published: Institute of Electrical and Electronics Engineers (IEEE) 2018
Online Access: http://hdl.handle.net/1721.1/117632
https://orcid.org/0000-0002-9673-1267
https://orcid.org/0000-0003-2492-6660
https://orcid.org/0000-0002-2225-7275
https://orcid.org/0000-0001-5473-3566
Description: Deep learning has been successfully applied to "end-to-end" learning of the autonomous driving task, where a deep neural network learns to predict steering control commands from camera input. While such prior work supports reactive control, the learned representation is not usable for the higher-level decision making required for autonomous navigation. This paper tackles the problem of learning a representation that predicts a continuous control probability distribution, and thus steering control options and bounds on those options, which can be used for autonomous navigation. Each mode of the distribution encodes a possible macro-action that the system could execute at that instant, and the covariances of the modes place bounds on safe steering control values. Our approach has the added advantage of being trained on unlabeled data collected from inexpensive cameras. The deep-neural-network-based algorithm generates a probability distribution over the space of steering angles, from which we use variational Bayesian methods to extract a mixture model and compute the possible actions in the environment. A bound, which the autonomous vehicle must respect in our parallel-autonomy setting, is then computed for each of these actions. We evaluate our approach on a challenging dataset containing a wide variety of driving conditions, and show that our algorithm is capable of parameterizing Gaussian Mixture Models for possible actions and of extracting steering bounds with a mean error of only 2 degrees. Additionally, we demonstrate our system on a full-scale autonomous vehicle and evaluate its ability to successfully handle a variety of parallel-autonomy situations.
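The pipeline the abstract describes (fit a mixture model to the network's steering-angle distribution with variational Bayesian inference, then take per-mode bounds from the covariances) can be sketched as follows. This is a minimal illustration using scikit-learn's `BayesianGaussianMixture`, not the paper's implementation: the sample data, the 0.1 weight cutoff, and the mean ± 2σ bound rule are all assumptions made here for the example.

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

# Hypothetical samples standing in for a network's predicted
# steering-angle distribution (degrees): two modes, e.g. a
# "continue straight" action and a "bear right" action.
rng = np.random.default_rng(0)
samples = np.concatenate([
    rng.normal(loc=0.0, scale=2.0, size=500),   # go straight
    rng.normal(loc=25.0, scale=3.0, size=300),  # bear right
]).reshape(-1, 1)

# Variational Bayesian fit: the Dirichlet-process prior drives
# unneeded components' weights toward zero, so n_components is
# only an upper bound on the number of actions recovered.
gmm = BayesianGaussianMixture(n_components=5, random_state=0)
gmm.fit(samples)

# Keep modes with non-negligible mass; bound each action at
# mean +/- 2 standard deviations (an assumed rule, for
# illustration only).
for w, mu, cov in zip(gmm.weights_, gmm.means_, gmm.covariances_):
    if w < 0.1:
        continue
    sigma = float(np.sqrt(cov[0, 0]))
    lo, hi = mu[0] - 2 * sigma, mu[0] + 2 * sigma
    print(f"action {mu[0]:+6.1f} deg  bounds [{lo:+6.1f}, {hi:+6.1f}]")
```

On this toy data the fit recovers the two injected modes, and each surviving mode yields one candidate action with a steering interval the vehicle should stay inside.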
Citation: Amini, Alexander, Liam Paull, Thomas Balch, Sertac Karaman, and Daniela Rus. "Learning Steering Bounds for Parallel Autonomous Systems." 2018 IEEE International Conference on Robotics and Automation (ICRA), 21-26 May 2018, Brisbane, Australia.
Conference: 2018 IEEE International Conference on Robotics and Automation (ICRA), https://icra2018.org/
Type: Conference paper (http://purl.org/eprint/type/ConferencePaper)
Date Issued: 2018-05
Other Departments: Massachusetts Institute of Technology. Department of Aeronautics and Astronautics; Department of Electrical Engineering and Computer Science; Department of Mathematics; Department of Mechanical Engineering
Funder: Toyota Research Institute
Rights: Creative Commons Attribution-NonCommercial-ShareAlike (http://creativecommons.org/licenses/by-nc-sa/4.0/)