Adaptive Reinforcement Learning Model for Simulation of Urban Mobility during Crises

The objective of this study is to propose and test an adaptive reinforcement learning model that can learn the patterns of human mobility in a normal context and simulate mobility during perturbations caused by crises, such as flooding, wildfires, and hurricanes. Understanding and predicting human mobility patterns, such as destination and trajectory selection, can help anticipate the congestion and road closures that arise from disruptions in emergencies. Data related to human movement trajectories are scarce, especially in the context of emergencies, which limits the applicability of existing urban mobility models learned from empirical data. Models that can learn mobility patterns from data generated in normal situations and adapt to emergency situations are needed to inform emergency response and urban resilience assessments. To address this gap, this study creates and tests an adaptive reinforcement learning model that can predict the destinations of movements, estimate the trajectory for each origin-destination pair, and examine the impact of perturbations on people's decisions about destinations and movement trajectories. The application of the proposed model is demonstrated in the context of Houston and the flooding caused by Hurricane Harvey in August 2017. The results show that the model achieves more than 76% precision and recall, and that it can predict the traffic patterns and congestion resulting from urban flooding. These outcomes demonstrate the model's capability for analyzing urban mobility during crises, which can inform the public and decision-makers about response strategies and resilience planning to reduce the impacts of crises on urban mobility.
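
The paper's implementation is not reproduced here. As a rough, hypothetical illustration of the trajectory-selection component only (destination prediction is not shown), the sketch below uses tabular Q-learning on a toy road network: the agent learns a route under normal travel times, and retraining under a flood perturbation that raises the cost of an affected link shifts the learned trajectory. The network topology, travel times, and hyperparameters are invented for illustration and are not from the paper.

```python
# Illustrative sketch only (not the authors' model): tabular Q-learning for
# route choice on a hypothetical toy road network, before and after a flood
# perturbation that raises travel time on one link.
import random
from collections import defaultdict

# Toy road network: node -> {neighbor: travel time in minutes} (hypothetical)
GRAPH = {
    "A": {"B": 4, "C": 2},
    "B": {"A": 4, "D": 5},
    "C": {"A": 2, "D": 8, "E": 3},
    "D": {"B": 5, "C": 8, "E": 2, "F": 6},
    "E": {"C": 3, "D": 2, "F": 7},
    "F": {"D": 6, "E": 7},
}

def train(graph, origin, dest, episodes=2000, alpha=0.1, gamma=0.95, eps=0.2):
    """Learn a routing policy from origin to dest; reward is negative travel time."""
    Q = defaultdict(float)  # (node, next_node) -> estimated value
    for _ in range(episodes):
        node = origin
        for _ in range(50):  # cap episode length
            if node == dest:
                break
            nbrs = list(graph[node])
            # Epsilon-greedy action selection over outgoing edges
            if random.random() < eps:
                nxt = random.choice(nbrs)
            else:
                nxt = max(nbrs, key=lambda n: Q[(node, n)])
            reward = -graph[node][nxt]
            future = 0.0 if nxt == dest else max(Q[(nxt, n)] for n in graph[nxt])
            Q[(node, nxt)] += alpha * (reward + gamma * future - Q[(node, nxt)])
            node = nxt
    return Q

def greedy_path(Q, graph, origin, dest):
    """Roll out the greedy policy to extract a trajectory."""
    path, node = [origin], origin
    while node != dest and len(path) < 20:
        node = max(graph[node], key=lambda n: Q[(node, n)])
        path.append(node)
    return path

random.seed(0)
Q = train(GRAPH, "A", "F")
print("normal route: ", greedy_path(Q, GRAPH, "A", "F"))

# Perturbation: flooding makes the E-F link nearly impassable, so the agent
# retrained on the perturbed network learns a detour.
flooded = {u: dict(v) for u, v in GRAPH.items()}
flooded["E"]["F"] = flooded["F"]["E"] = 60
Qf = train(flooded, "A", "F")
print("flooded route:", greedy_path(Qf, flooded, "A", "F"))
```

In this toy setup the normal policy converges to A-C-E-F, while the flooded policy detours through D (A-C-E-D-F); the paper's model presumably operates at the scale of a real road network with empirically learned rewards, which this sketch does not attempt.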
