SEABIG: A Deep Learning-Based Method for Location Prediction in Pedestrian Semantic Trajectories

Bibliographic Details
Main Authors: Wanlong Zhang, Liting Sun, Xiang Wang, Zhitao Huang, Baoguo Li
Format: Article
Language: English
Published: IEEE 2019-01-01
Series: IEEE Access
Online Access: https://ieeexplore.ieee.org/document/8790746/
Description
Summary: Predicting a pedestrian's destination is an important and challenging task for location-based services (LBSs) such as traffic planning and travel recommendation. Typical methods apply a statistical model to predict the future location from the raw trajectory. However, existing approaches fall short in accommodating long-range dependencies and ignore the semantic information present in the raw trajectory. In this paper, we propose a method named semantics-enriched attentional BiGRU (SEABIG) to address these two problems. First, we design a probabilistic model based on a Gaussian mixture model (GMM) to extract stopover points from the raw trajectories and annotate semantic information on these stopover points. Then we propose an attentional BiGRU-based trajectory prediction model, which jointly learns the embeddings of the semantic trajectory. It not only takes advantage of the bidirectional gated recurrent unit (BiGRU) for sequence modeling but also, through an attention mechanism, gives more weight to meaningful positions that correlate strongly with the destination. Finally, we annotate the most likely semantics on the predicted position with the probabilistic model. Extensive experiments on real-world Beijing datasets demonstrate that our proposed method achieves higher prediction accuracy.
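The summary describes two concrete components: a GMM-based probabilistic model for stopover extraction, and an attentional BiGRU predictor. A minimal sketch of the first step is shown below, assuming scikit-learn; the function name extract_stopover_points, the speed threshold, and the component count are illustrative assumptions, not details from the paper.

```python
# Hypothetical sketch of GMM-based stopover-point extraction; parameter
# values are illustrative, not taken from the paper.
import numpy as np
from sklearn.mixture import GaussianMixture

def extract_stopover_points(trajectory, speed_threshold=1e-5, n_regions=8):
    """Cluster near-stationary GPS fixes into candidate stopover regions.

    trajectory: array of shape (T, 3) with columns (lat, lon, unix_time).
    Returns the fitted mixture and its means as representative stopovers.
    """
    # Finite-difference speed between consecutive fixes (degrees per second
    # here for brevity; a real system would project to metres first).
    deltas = np.diff(trajectory[:, :2], axis=0)
    dt = np.maximum(np.diff(trajectory[:, 2]), 1e-6)
    speed = np.linalg.norm(deltas, axis=1) / dt

    # Keep only near-stationary fixes as stopover candidates.
    stationary = trajectory[1:][speed < speed_threshold, :2]

    # Each mixture component models one stopover region; a semantic label
    # (e.g. a nearby POI category) would then be attached per component.
    gmm = GaussianMixture(n_components=n_regions, covariance_type="full",
                          random_state=0).fit(stationary)
    return gmm, gmm.means_
```

For the second component, the sketch below shows one common way to combine jointly learned embeddings, a BiGRU encoder, and softmax attention pooling in PyTorch; the class name, layer sizes, and the attention formulation are assumptions rather than the authors' exact architecture.

```python
# Hypothetical attentional BiGRU next-location predictor (PyTorch);
# dimensions and the attention form are assumed, not from the paper.
import torch
import torch.nn as nn

class AttentionalBiGRU(nn.Module):
    def __init__(self, n_locations, emb_dim=64, hidden_dim=128):
        super().__init__()
        # Jointly learned embeddings of semantic-trajectory tokens.
        self.embed = nn.Embedding(n_locations, emb_dim)
        self.bigru = nn.GRU(emb_dim, hidden_dim, batch_first=True,
                            bidirectional=True)
        # Scores each timestep so attention can emphasise positions that
        # correlate strongly with the destination.
        self.attn = nn.Linear(2 * hidden_dim, 1)
        self.out = nn.Linear(2 * hidden_dim, n_locations)

    def forward(self, seq):                           # seq: (batch, T) ids
        h, _ = self.bigru(self.embed(seq))            # (batch, T, 2*hidden)
        weights = torch.softmax(self.attn(h), dim=1)  # (batch, T, 1)
        context = (weights * h).sum(dim=1)            # attention pooling
        return self.out(context)                      # next-location logits
```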
ISSN: 2169-3536