Robustifying Point Cloud Networks by Refocusing

10 Aug 2023  ·  Meir Yossef Levi, Guy Gilboa

The ability to cope with out-of-distribution (OOD) corruptions and adversarial attacks is crucial in real-world safety-critical applications. In this study, we develop a general mechanism to increase neural network robustness based on focus analysis. Recent studies have revealed the phenomenon of overfocusing, which leads to a performance drop: when the network is primarily influenced by small input regions, it becomes less robust and prone to misclassification under noise and corruptions. However, overfocusing still lacks a precise, quantitative definition. Here, we provide a mathematical definition of focus, overfocusing and underfocusing. The notions are general, but in this study we specifically investigate the case of 3D point clouds. We observe that corrupted sets yield a biased focus distribution compared to the clean training set, and we show that as the focus distribution deviates from the one learned in the training phase, classification performance deteriorates. We thus propose a parameter-free refocusing algorithm that aims to unify all corruptions under the same distribution. We validate our findings on a 3D zero-shot classification task, achieving state-of-the-art results in robust 3D classification on the ModelNet-C dataset and in adversarial defense against the Shape-Invariant attack. Code is available at: https://github.com/yossilevii100/refocusing.
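The abstract does not spell out the paper's exact focus measure, but the idea can be illustrated with a simple proxy. The sketch below is an assumption-laden toy, not the authors' method: per-point influence is taken as the fraction of feature channels a point "wins" in a PointNet-style global max-pooling, focus is the normalized entropy of that influence distribution (low entropy ≈ overfocused, high ≈ underfocused), and refocusing is mimicked by dropping the most influential points. The names `influence_distribution`, `focus`, `refocus` and the `keep_ratio` parameter are all hypothetical; the actual algorithm is parameter-free and distribution-matching.

```python
import numpy as np

def influence_distribution(features):
    """features: (N, D) per-point features before global max-pooling.
    A point's influence = fraction of channels where it attains the max.
    (Illustrative proxy, not the paper's definition.)"""
    winners = features.argmax(axis=0)                      # (D,) winning point per channel
    counts = np.bincount(winners, minlength=features.shape[0])
    return counts / counts.sum()

def focus(p, eps=1e-12):
    """Normalized entropy of the influence distribution, in [0, 1].
    Near 0 -> overfocused (few points dominate); near 1 -> underfocused."""
    h = -(p * np.log(p + eps)).sum()
    return h / np.log(len(p))

def refocus(points, features, keep_ratio=0.9):
    """Toy refocusing: discard the most influential points.
    keep_ratio is a hypothetical knob; the paper's method is parameter-free."""
    p = influence_distribution(features)
    order = np.argsort(p)                                  # ascending influence
    keep = order[: int(len(points) * keep_ratio)]
    return points[keep]
```

Under this proxy, a cloud whose max-pooled features are dominated by a handful of critical points scores a low focus value, matching the intuition that such networks are brittle under corruption.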

Task                       | Dataset      | Model                          | Metric                      | Value | Rank
Point Cloud Classification | PointCloud-C | Critical Points++ & EPiC (RPC)   | mean Corruption Error (mCE) | 0.476 | #1
Point Cloud Classification | PointCloud-C | Critical Points++ & EPiC (DGCNN) | mean Corruption Error (mCE) | 0.484 | #2
Point Cloud Classification | PointCloud-C | Critical Points++ & EPiC (GDANet)| mean Corruption Error (mCE) | 0.493 | #3
