Traffic prediction is the task of forecasting future traffic volumes using historical speed and volume data.
(Image credit: BaiduTraffic)
Our method uses a multilayered Long Short-Term Memory (LSTM) network to map the input sequence to a vector of fixed dimensionality, and then another deep LSTM to decode the target sequence from that vector.
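The encoder-decoder idea above can be sketched in a few lines of numpy. This is a minimal illustration, not the paper's implementation: the single-layer LSTM cell, the toy dimensions, the random weights, and the choice to feed the decoder its own previous hidden state are all assumptions made for brevity.

```python
import numpy as np

def lstm_step(x, h, c, W, U, b):
    """One LSTM step: input, forget, output gates and candidate cell."""
    z = W @ x + U @ h + b                  # stacked gate pre-activations
    H = h.size
    i = 1.0 / (1.0 + np.exp(-z[:H]))       # input gate
    f = 1.0 / (1.0 + np.exp(-z[H:2*H]))    # forget gate
    o = 1.0 / (1.0 + np.exp(-z[2*H:3*H]))  # output gate
    g = np.tanh(z[3*H:])                   # candidate cell state
    c = f * c + i * g
    h = o * np.tanh(c)
    return h, c

rng = np.random.default_rng(0)
in_dim, hid, T_in, T_out = 2, 8, 12, 3     # toy sizes (assumed)

# Encoder and decoder weights (random; a real model would learn these).
We = rng.normal(0, 0.1, (4 * hid, in_dim))
Ue = rng.normal(0, 0.1, (4 * hid, hid))
be = np.zeros(4 * hid)
Wd = rng.normal(0, 0.1, (4 * hid, hid))
Ud = rng.normal(0, 0.1, (4 * hid, hid))
bd = np.zeros(4 * hid)

# Encode a historical speed/volume sequence into one fixed-length vector.
h = np.zeros(hid)
c = np.zeros(hid)
for x in rng.normal(size=(T_in, in_dim)):
    h, c = lstm_step(x, h, c, We, Ue, be)
context = h                                # fixed-dimensional summary

# Decode T_out future steps from that vector (previous hidden state is
# reused as the decoder input here, a simplification).
dh, dc, preds = context, c, []
for _ in range(T_out):
    dh, dc = lstm_step(dh, dh, dc, Wd, Ud, bd)
    preds.append(dh)
```

The key point the sketch shows is that the entire input history is compressed into `context` before any decoding begins; the decoder sees only that fixed-size vector.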
Ranked #5 on Traffic Prediction on PeMS-M
Spatiotemporal forecasting has various applications in the neuroscience, climate, and transportation domains.
Ranked #3 on Traffic Prediction on PEMS-BAY
Multivariate Time Series Forecasting, Spatio-Temporal Forecasting, Time Series, Time Series Prediction, Traffic Prediction
Timely and accurate traffic forecasting is crucial for urban traffic control and guidance.
Ranked #4 on Traffic Prediction on PeMS-M
However, traffic forecasting has long been regarded as an open scientific problem, owing to the constraints imposed by urban road-network topology and the laws of dynamic change over time, namely spatial dependence and temporal dependence.
To navigate complex urban traffic safely and efficiently, autonomous vehicles must make reliable predictions about surrounding traffic agents (vehicles, bicycles, pedestrians, etc.).
Autonomous Vehicles, Traffic Prediction, Trajectory Prediction
Traffic forecasting is a particularly challenging application of spatiotemporal forecasting, owing to time-varying traffic patterns and the complicated spatial dependencies on road networks.
We present a series of modifications which improve upon Graph WaveNet's previously state-of-the-art performance on the METR-LA traffic prediction task.
Ranked #2 on Traffic Prediction on METR-LA
Spatial-temporal graph modeling is an important task to analyze the spatial relations and temporal trends of components in a system.
Ranked #1 on Traffic Prediction on PEMS-BAY
Although both factors have been considered in modeling, existing works make strong assumptions about spatial dependence and temporal dynamics, i.e., that spatial dependence is stationary in time and that temporal dynamics are strictly periodic.
Between the encoder and the decoder, a transform attention layer converts the encoded traffic features into sequence representations of future time steps, which serve as the input to the decoder.
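A transform-attention step of this kind can be sketched with plain scaled dot-product attention: queries derived from future-time-step embeddings attend over the encoded historical features, yielding one decoder input per future step. The dimensions, random weights, and variable names below are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(1)
d, T_hist, T_fut = 16, 12, 3               # toy sizes (assumed)

enc = rng.normal(size=(T_hist, d))         # encoded historical features
fut = rng.normal(size=(T_fut, d))          # future time-step embeddings

# Learned projections in a real model; random here for illustration.
Wq = rng.normal(0, 0.1, (d, d))
Wk = rng.normal(0, 0.1, (d, d))
Wv = rng.normal(0, 0.1, (d, d))

Q, K, V = fut @ Wq, enc @ Wk, enc @ Wv
scores = Q @ K.T / np.sqrt(d)              # (T_fut, T_hist) attention logits
attn = np.exp(scores - scores.max(axis=1, keepdims=True))
attn /= attn.sum(axis=1, keepdims=True)    # row-wise softmax over history
dec_in = attn @ V                          # (T_fut, d): decoder inputs
```

Each row of `dec_in` is a weighted summary of the historical encodings tailored to one future time step, which is what allows the decoder to consume future-aligned representations rather than raw history.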