Scope Compliance Uncertainty Estimate

The zeitgeist of the digital era has been dominated by an expanding integration of Artificial Intelligence (AI) into a plethora of applications across various domains. With this expansion, however, questions about the safety and reliability of these methods have become more relevant than ever. Consequently, run-time ML model safety systems have been developed to ensure that a model operates within its intended context, especially in applications whose environments are highly variable, such as Autonomous Vehicles (AVs). SafeML is a model-agnostic approach for performing such monitoring: it uses distance measures based on statistical testing of the training and operational datasets, compares them to a predetermined threshold, and returns a binary value indicating whether the model should be trusted in the context of the observed data or deemed unreliable. Although a systematic framework exists for this approach, its performance is hindered by (1) a dependency on a number of design parameters that directly affect the selection of the safety threshold and therefore likely affect its robustness, (2) an inherent assumption of certain distributions for the training and operational sets, and (3) a high computational complexity for relatively large datasets. This work addresses these limitations by replacing the binary decision with a continuous metric. Furthermore, all data distribution assumptions are made obsolete by implementing non-parametric approaches, and the computational speed is increased by introducing a new distance measure based on the Empirical Characteristic Functions (ECF).
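To make the ECF-based comparison concrete, the sketch below computes an illustrative distance between a training sample and an operational sample by evaluating both empirical characteristic functions on a grid and averaging the squared modulus of their difference. This is only a minimal one-dimensional illustration under assumed choices (the evaluation grid, the squared-difference aggregation, and the names ecf and ecf_distance are not taken from the paper, whose exact statistic and continuous scope-compliance metric are not reproduced here).

    import numpy as np

    def ecf(sample, t_grid):
        # Empirical characteristic function phi_n(t) = (1/n) * sum_j exp(i * t * x_j),
        # evaluated for every t in t_grid (sample treated as one-dimensional).
        return np.exp(1j * np.outer(t_grid, np.asarray(sample))).mean(axis=1)

    def ecf_distance(train, operational, t_grid=None):
        # Average squared modulus of the difference between the two ECFs over the
        # grid; larger values indicate a larger distributional discrepancy.
        if t_grid is None:
            t_grid = np.linspace(-5.0, 5.0, 201)
        return float(np.mean(np.abs(ecf(train, t_grid) - ecf(operational, t_grid)) ** 2))

    # Usage: operational data drawn from the training distribution scores low,
    # while shifted operational data scores noticeably higher.
    rng = np.random.default_rng(0)
    train = rng.normal(0.0, 1.0, 1000)
    print(ecf_distance(train, rng.normal(0.0, 1.0, 1000)))   # close to 0
    print(ecf_distance(train, rng.normal(1.5, 1.0, 1000)))   # larger

Because the ECFs are evaluated on a fixed grid rather than estimated via a parametric fit or a full pairwise comparison of the samples, this style of measure needs no distributional assumptions and scales linearly in the sample sizes, which is the motivation given in the abstract for adopting it.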
