Real-time trajectory adaptation for quadrupedal locomotion using deep reinforcement learning

Bibliographic Details
Main Authors: Gangapurwala, S, Geisert, M, Orsolino, R, Fallon, M, Havoutis, I
Format: Conference item
Language: English
Published: IEEE 2021
Description
Summary: We present a control architecture for real-time adaptation and tracking of trajectories generated by a terrain-aware trajectory optimization solver. This approach enables us to circumvent the computationally expensive task of online trajectory optimization, and further introduces a control solution that is robust to systems modeled with approximate dynamics. We train a policy using deep reinforcement learning (RL) to introduce additive deviations to a reference trajectory, yielding a feedback-based trajectory tracking system for a quadrupedal robot. We train this policy across a multitude of simulated terrains and ensure its generality by introducing training methods that avoid overfitting and convergence towards local optima. Additionally, in order to capture terrain information, we include a latent representation of the height maps in the observation space of the RL environment as a form of exteroceptive feedback. We test the performance of our trained policy by tracking the corrected set points with a model-based whole-body controller, and compare this with the tracking behavior obtained without the corrective feedback in several simulation environments. We also show successful transfer of our training approach to the real physical system and further present cogent arguments in support of our framework.
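
The summary describes a policy that outputs additive corrections to reference set points, with a latent height-map encoding included in the observation. The following is a minimal, hypothetical sketch of that idea, not the authors' implementation: the network stand-in, observation sizes, clipping bound, and the random-projection encoder are all assumptions made purely for illustration.

import numpy as np

class CorrectiveTrajectoryPolicy:
    # Hypothetical stand-in for a trained RL policy: maps proprioception
    # plus a latent height-map encoding to bounded additive deviations
    # that are applied to the reference set points before tracking.
    def __init__(self, obs_dim, action_dim, max_deviation=0.05, seed=0):
        rng = np.random.default_rng(seed)
        self.weights = 0.01 * rng.standard_normal((action_dim, obs_dim))
        self.max_deviation = max_deviation  # assumed correction bound

    def __call__(self, observation):
        raw = self.weights @ observation
        # Squash and clip so corrections stay small relative to the reference.
        return np.clip(np.tanh(raw), -1.0, 1.0) * self.max_deviation

def encode_height_map(height_map, latent_dim=16, seed=0):
    # Placeholder for the latent exteroceptive encoding (e.g. an
    # autoencoder bottleneck); here a fixed random projection.
    rng = np.random.default_rng(seed)
    projection = rng.standard_normal((latent_dim, height_map.size))
    return projection @ height_map.ravel()

# Usage sketch (all dimensions assumed):
proprioception = np.zeros(36)          # joint and base state
height_map = np.zeros((15, 15))        # local terrain patch
latent = encode_height_map(height_map)
observation = np.concatenate([proprioception, latent])

policy = CorrectiveTrajectoryPolicy(obs_dim=observation.size, action_dim=12)
reference_set_points = np.zeros(12)    # from the offline trajectory optimizer
corrected_set_points = reference_set_points + policy(observation)
# corrected_set_points would then be tracked by a model-based
# whole-body controller, as described in the summary.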