Search Results for author: Andre Platzer

Found 1 paper, 1 paper with code

Verifiably Safe Off-Model Reinforcement Learning

1 code implementation • 14 Feb 2019 • Nathan Fulton, Andre Platzer

Through a combination of design-time model updates and runtime model falsification, we provide a first approach toward obtaining formal safety proofs for autonomous systems acting in heterogeneous environments.
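As a rough illustration of the runtime-model-falsification idea mentioned above, the sketch below monitors whether observed transitions agree with an assumed environment model and falls back to a conservative action once the model is contradicted. The 1-D braking setup, tolerance, and fallback policy are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal sketch of runtime model falsification (assumed 1-D braking example).
from dataclasses import dataclass


@dataclass
class LinearModel:
    """Assumed environment model: next position = pos + vel * dt."""
    dt: float = 0.1

    def predict(self, pos: float, vel: float) -> float:
        return pos + vel * self.dt


def falsified(model: LinearModel, pos: float, vel: float,
              observed_next_pos: float, tol: float = 0.05) -> bool:
    """Return True when the observed transition contradicts the model."""
    return abs(model.predict(pos, vel) - observed_next_pos) > tol


def safe_action(learned_action: float, model_ok: bool) -> float:
    """Use the learned action only while the model holds; otherwise
    fall back to a conservative action (full braking, -1.0)."""
    return learned_action if model_ok else -1.0


if __name__ == "__main__":
    model = LinearModel()
    pos, vel = 0.0, 1.0
    model_ok = True

    # Observed next position disagrees with the model's prediction,
    # e.g. because the real environment has unmodeled dynamics.
    observed_next_pos = 0.2
    if falsified(model, pos, vel, observed_next_pos):
        model_ok = False

    print(safe_action(learned_action=0.7, model_ok=model_ok))  # -> -1.0
```

In the paper's setting, falsifying the current model would trigger a switch to another verified model or a provably safe fallback controller; the sketch only shows the monitoring-and-fallback pattern.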

Reinforcement Learning (RL)
