A Comprehensive Review of Trends, Applications and Challenges In Out-of-Distribution Detection

26 Sep 2022  ·  Navid Ghassemi, Ehsan Fazl-Ersi

With recent advancements in artificial intelligence, its applications can be seen in every aspect of daily human life. From voice assistants to mobile healthcare and autonomous driving, we rely on the performance of AI methods for many critical tasks; therefore, it is essential to assess the performance of models by appropriate means to prevent harm. One of the shortfalls of AI models in general, and deep learning models in particular, is a drop in performance when faced with shifts in the distribution of data. Yet such shifts are always to be expected in real-world applications; thus, a field of study has emerged that focuses on detecting out-of-distribution data and enabling more comprehensive generalization. Furthermore, as many deep learning based models have achieved near-perfect results on benchmark datasets, the need to evaluate their reliability and trustworthiness before deployment in real-world applications is felt more strongly than ever. This has given rise to a growing number of studies on out-of-distribution detection and domain generalization, which calls for surveys that compare these studies from various perspectives and highlight their strengths and weaknesses. This paper presents a survey that, in addition to reviewing more than 70 papers in this field, presents challenges and directions for future work and offers a unifying look at various types of data shifts and solutions for better generalization.
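To make the notion of "detecting out-of-distribution data" concrete, the sketch below shows one widely used baseline detector, maximum softmax probability scoring: inputs on which a classifier's top softmax confidence falls below a threshold are flagged as likely out-of-distribution. This is an illustrative example of the kind of method the survey covers, not the paper's own approach; the logits and the threshold are hypothetical placeholders.

```python
# Minimal sketch of the maximum softmax probability (MSP) OOD baseline.
# Logits, class count, and threshold below are hypothetical illustrations.
import numpy as np


def softmax(logits: np.ndarray) -> np.ndarray:
    """Row-wise softmax with max-subtraction for numerical stability."""
    shifted = logits - logits.max(axis=1, keepdims=True)
    exp = np.exp(shifted)
    return exp / exp.sum(axis=1, keepdims=True)


def msp_score(logits: np.ndarray) -> np.ndarray:
    """Maximum softmax probability: higher values suggest in-distribution inputs."""
    return softmax(logits).max(axis=1)


def flag_ood(logits: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """Flag inputs whose confidence falls below a threshold tuned on validation data."""
    return msp_score(logits) < threshold


if __name__ == "__main__":
    # Hypothetical logits from a 3-class classifier:
    # the first input is confidently classified, the second is ambiguous.
    logits = np.array([[6.0, 0.5, -1.0],
                       [0.2, 0.1, 0.0]])
    print(msp_score(logits))   # roughly [0.995, 0.367]
    print(flag_ood(logits))    # [False, True]
```

In practice, the threshold is chosen on held-in validation data (for example, to retain a fixed true-positive rate on in-distribution samples), and many of the surveyed methods replace the raw softmax confidence with more robust scores.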
