Fast Trajectory Prediction Method With Attention Enhanced SRU
LSTM (Long Short-Term Memory) is an effective method for trajectory prediction. However, computing the state of each hidden-layer neuron requires the state of the previous unit, which leads to long training and prediction times. To address this problem, we propose a Fast Trajectory Prediction method with Attention-enhanced SRU (FTP-AS). First, we devise a trajectory prediction method based on SRU (Simple Recurrent Units). It removes the dependency on the hidden-layer state of the previous time step, allowing the model to parallelize its computation and speeding up both training and prediction. However, each SRU unit computes its state at every time step independently, ignoring the temporal relationship between track points and reducing accuracy. Second, we enhance the SRU with an attention mechanism: an influence weight for selective learning is obtained by computing the matching degree of the hidden-layer state at each time step, which improves prediction accuracy. Finally, experimental results on the MTA bus and Porto taxi data sets show that FTP-AS is 3.4 times faster and about 1.7% more accurate than the traditional LSTM method.
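The abstract's key speed claim is that SRU removes the dependence on the previous hidden state, so the heavy matrix computations can run in parallel across time steps. The paper's exact formulation is not reproduced in this record, so the sketch below follows the standard SRU recurrence (Lei et al.'s widely used form); all names and shapes are illustrative. Note that only a cheap element-wise loop remains serial:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sru_layer(X, Wx, Wf, bf, Wr, br):
    """Minimal SRU forward pass over a sequence X of shape (T, d).

    Every matrix product below depends only on the inputs X, not on any
    previous hidden state, so it is computed for all T time steps at
    once. The loop that remains carries only element-wise operations on
    the cell state c, which is why SRU trains much faster than LSTM.
    """
    T, d = X.shape
    # Heavy computation: batched over all time steps in parallel.
    Xt = X @ Wx                    # candidate values x~_t
    F = sigmoid(X @ Wf + bf)       # forget gates f_t
    R = sigmoid(X @ Wr + br)       # reset gates r_t

    c = np.zeros(d)
    H = np.empty((T, d))
    for t in range(T):             # only cheap element-wise ops are serial
        c = F[t] * c + (1.0 - F[t]) * Xt[t]
        H[t] = R[t] * np.tanh(c) + (1.0 - R[t]) * X[t]  # highway connection
    return H
```

In an LSTM, by contrast, the gate matrices multiply `h_{t-1}` at every step, so no matrix product can be lifted out of the time loop; this difference is what the abstract's 3.4x speedup exploits.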
Main Authors: | Yadong Li, Bailong Liu, Lei Zhang, Susong Yang, Changxing Shao, Dan Son |
---|---|
Format: | Article |
Language: | English |
Published: | IEEE, 2020-01-01 |
Series: | IEEE Access |
Subjects: | Simple recurrent units; attention mechanism; trajectory prediction |
Online Access: | https://ieeexplore.ieee.org/document/9247208/ |
id | doaj-960962d284954fb5b67055f57776ff24 |
ISSN | 2169-3536 |
DOI | 10.1109/ACCESS.2020.3035704 |
Citation | IEEE Access, vol. 8, pp. 206614-206621, 2020 |

Authors and affiliations:
- Yadong Li (ORCID 0000-0003-0412-5858), School of Information Science and Engineering, Zaozhuang University, Zaozhuang, China
- Bailong Liu (ORCID 0000-0001-5112-7720), School of Computer Science, China University of Mining and Technology, Xuzhou, China
- Lei Zhang (ORCID 0000-0003-2067-8719), School of Computer Science, China University of Mining and Technology, Xuzhou, China
- Susong Yang, Operating Branch, Ningbo Rail Transit Group Company Ltd., Ningbo, China
- Changxing Shao (ORCID 0000-0002-2181-3567), School of Computer Science, China University of Mining and Technology, Xuzhou, China
- Dan Son, School of Information Science and Engineering, Zaozhuang University, Zaozhuang, China
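The attention enhancement described in the abstract assigns each hidden state an influence weight from the "matching degree" of hidden-layer states. The record does not give the exact scoring function, so the sketch below assumes simple dot-product attention with the final hidden state as the query, a common choice but not necessarily the paper's:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def attention_context(H):
    """Dot-product attention over SRU hidden states H of shape (T, d).

    The last hidden state serves as the query (an assumption; FTP-AS's
    matching function is not stated in this record). Each time step
    gets a normalized influence weight from its similarity to the
    query, restoring temporal relationships that independent SRU
    updates ignore. The weighted sum feeds the prediction layer.
    """
    query = H[-1]
    scores = H @ query          # matching degree per time step
    weights = softmax(scores)   # normalized influence weights
    context = weights @ H       # (d,) attention-weighted summary
    return context, weights
```

Because the weights are data-dependent, time steps whose hidden states best match the query dominate the context vector, which is how selective learning over track points can recover the accuracy lost to SRU's step-independent updates.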
collection | DOAJ |
topic |
Simple recurrent units; attention mechanism; trajectory prediction |
url |
https://ieeexplore.ieee.org/document/9247208/ |