Regularising neural networks for future trajectory prediction via inverse reinforcement learning framework


Bibliographic Details
Main Authors: Dooseop Choi, Kyoungwook Min, Jeongdan Choi
Format: Article
Language: English
Published: Wiley 2020-08-01
Series: IET Computer Vision
Subjects:
Online Access: https://doi.org/10.1049/iet-cvi.2019.0546
author Dooseop Choi
Kyoungwook Min
Jeongdan Choi
author_sort Dooseop Choi
collection DOAJ
description Predicting the distant future trajectories of agents in a dynamic scene is challenging because the future trajectory of an agent is affected not only by its past trajectory but also by the scene context. To tackle this problem, the authors propose a model based on recurrent neural networks and a novel method for training it. The model follows an encoder–decoder architecture: the encoder encodes the inputs (the past trajectory and scene context information), while the decoder produces a future trajectory from the context vector given by the encoder. To make the model better utilise the scene context information, the authors let the encoder predict the positions in the past trajectory and let a reward function evaluate those positions together with the scene context information they generate. The reward function, trained simultaneously with the model, acts as a regulariser during this joint training. The authors evaluate the model on several public benchmark datasets. The experimental results show that the proposed regularisation method greatly improves prediction performance, and the resulting model outperforms state-of-the-art models in terms of accuracy.
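The encoder–decoder setup and the reward-based regulariser described in the abstract can be sketched in miniature. The code below is a hypothetical NumPy illustration, not the authors' implementation: the hidden size, horizon lengths, weight initialisation, and the fixed linear reward function are all assumptions, and in the paper the reward function is learned jointly with the predictor via the inverse reinforcement learning framework rather than held fixed.

```python
import numpy as np

rng = np.random.default_rng(0)

H = 16                      # hidden size (hypothetical choice)
T_PAST, T_FUT = 8, 12       # observed / predicted horizon lengths (assumed)

# Encoder: a minimal vanilla RNN that folds past (x, y) positions
# into a single context vector.
Wx = rng.normal(0.0, 0.1, (H, 2))
Wh = rng.normal(0.0, 0.1, (H, H))

def encode(past):                       # past: (T_PAST, 2)
    h = np.zeros(H)
    for p in past:
        h = np.tanh(Wx @ p + Wh @ h)
    return h                            # context vector

# Decoder: rolls out future positions from the context vector.
Wd = rng.normal(0.0, 0.1, (H, H))
Wo = rng.normal(0.0, 0.1, (2, H))

def decode(h):
    preds = []
    for _ in range(T_FUT):
        h = np.tanh(Wd @ h)
        preds.append(Wo @ h)
    return np.array(preds)              # (T_FUT, 2)

# Reward function: a stand-in linear critic scoring one position.
# In the paper this is trained jointly with the model; here it is a
# fixed placeholder that only shows where it enters the objective.
wr = rng.normal(0.0, 0.1, 2)

def reward(pos):
    return np.tanh(wr @ pos)

def loss(past, future, lam=0.1):
    """Prediction error minus a reward-based regularisation term."""
    h = encode(past)
    pred = decode(h)
    mse = np.mean((pred - future) ** 2)
    # Reward evaluated on past positions (standing in for the
    # positions the encoder re-predicts in the paper's scheme).
    reg = np.mean([reward(p) for p in past])
    return mse - lam * reg

past = rng.normal(0.0, 1.0, (T_PAST, 2))
future = rng.normal(0.0, 1.0, (T_FUT, 2))
print(loss(past, future))
```

With `lam=0` the objective reduces to plain trajectory MSE; the reward term is what lets the (in the paper, learned) critic push the encoder toward representations consistent with the scene context.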
format Article
id doaj.art-b6c84e5e2d6a4604a9e31962d0924f90
institution Directory Open Access Journal
issn 1751-9632
1751-9640
language English
publishDate 2020-08-01
publisher Wiley
record_format Article
series IET Computer Vision
citation Dooseop Choi, Kyoungwook Min, Jeongdan Choi, "Regularising neural networks for future trajectory prediction via inverse reinforcement learning framework", IET Computer Vision, vol. 14, no. 5, pp. 192–200, Wiley, 2020-08-01. DOI: 10.1049/iet-cvi.2019.0546
affiliation All authors: Artificial Intelligence Research Laboratory, ETRI, Daejeon, Republic of Korea
title Regularising neural networks for future trajectory prediction via inverse reinforcement learning framework
topic future trajectory prediction
inverse reinforcement learning framework
dynamic scene
scene contexts
recurrent neural networks
encoder–decoder architecture
url https://doi.org/10.1049/iet-cvi.2019.0546