Regularising neural networks for future trajectory prediction via inverse reinforcement learning framework


Bibliographic Details
Main Authors: Dooseop Choi, Kyoungwook Min, Jeongdan Choi
Format: Article
Language: English
Published: Wiley, 2020-08-01
Series: IET Computer Vision
Online Access: https://doi.org/10.1049/iet-cvi.2019.0546
Description
Summary: Predicting the distant future trajectories of agents in a dynamic scene is challenging because an agent's future trajectory is affected not only by its past trajectory but also by the scene context. To tackle this problem, the authors propose a model based on recurrent neural networks, together with a novel method for training it. The model uses an encoder–decoder architecture: the encoder encodes the inputs (the past trajectory and scene-context information), while the decoder produces a future trajectory from the context vector given by the encoder. To make the model better exploit the scene-context information, the authors let the encoder predict the positions in the past trajectory and have a reward function evaluate those positions together with the scene-context information they generate. The reward function, which is trained jointly with the proposed model, acts as a regulariser for the model during this joint training. The authors evaluate the model on several public benchmark datasets. The experimental results show that the proposed regularisation method greatly improves prediction performance, and that the resulting model outperforms state-of-the-art models in terms of accuracy.
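The training objective described in the summary — a future-trajectory prediction loss plus a reward term that regularises the encoder's reconstructed past positions — can be sketched as a composite loss. The toy below is purely illustrative (plain Python, no real networks or scene features); all names, the hand-written reward, and the weighting factor are assumptions, not the paper's actual formulation.

```python
import random

random.seed(0)

# Toy stand-ins for trajectories: lists of (x, y) positions.
past_true = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(8)]
future_true = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(12)]

# Pretend model outputs: the encoder reconstructs the past positions and
# the decoder predicts the future trajectory (here, truth plus noise).
past_pred = [(x + 0.1 * random.gauss(0, 1), y + 0.1 * random.gauss(0, 1))
             for x, y in past_true]
future_pred = [(x + 0.2 * random.gauss(0, 1), y + 0.2 * random.gauss(0, 1))
               for x, y in future_true]

def mse(pred, true):
    # Mean squared error between two position sequences.
    return sum((px - tx) ** 2 + (py - ty) ** 2
               for (px, py), (tx, ty) in zip(pred, true)) / len(pred)

def reward(positions):
    # Hand-written toy reward: higher when positions stay near the scene
    # centre. In the paper, this is a network trained jointly with the
    # model that also consumes scene-context information.
    return -sum(x * x + y * y for x, y in positions) / len(positions)

# Composite objective: future-prediction loss plus a regularisation term
# that pushes the encoder's reconstructed past toward high-reward positions.
lam = 0.5                          # assumed regularisation weight
pred_loss = mse(future_pred, future_true)
reg_loss = -reward(past_pred)      # maximising reward = minimising its negative
total_loss = pred_loss + lam * reg_loss
```

The key design point the summary describes is that the reward term shapes the encoder's intermediate predictions rather than the final output, so the scene-context signal reaches the model through an extra loss term instead of only through the decoder's error.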
ISSN: 1751-9632, 1751-9640