Certifying Robustness of Graph Laplacian Based Semi-Supervised Learning
Graph Laplacian (GL)-based semi-supervised learning is one of the most widely used approaches for classifying nodes in a graph. Understanding and certifying the adversarial robustness of machine learning (ML) algorithms has attracted substantial attention from different research communities due to its crucial importance in many security-critical application domains, and there is great interest in the theoretical certification of adversarial robustness for popular ML algorithms. In this paper, we provide the first adversarial robustness certification for the GL classifier. Within a certain adversarial perturbation regime, we prove that GL with a $k$-nearest neighbor graph is intrinsically more robust than the $k$-nearest neighbor ($k$NN) classifier. Numerically, we show that the GL classifier outperforms $k$NN in robustness by a remarkable margin. Leveraging existing adversarial defenses, we show empirically that the robustness of the GL classifier can be increased further.
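To make the setting concrete, the following is a minimal sketch (not the authors' implementation) of GL-based semi-supervised classification on a $k$-nearest neighbor graph: build a symmetric kNN adjacency, form the unnormalized graph Laplacian $L = D - W$, and solve the harmonic system $L_{uu} f_u = -L_{ul} f_l$ for the unlabeled nodes. The function names, the Gaussian edge weights, and the choice of the unnormalized Laplacian are illustrative assumptions, not details from the paper.

```python
# Illustrative sketch only: graph Laplacian semi-supervised classification
# on a kNN graph via the harmonic solution. Details (weights, Laplacian
# normalization) are assumptions, not taken from the paper.
import numpy as np

def knn_graph(X, k):
    # Symmetric k-nearest-neighbor adjacency with Gaussian edge weights.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.zeros_like(d2)
    for i in range(len(X)):
        nbrs = np.argsort(d2[i])[1:k + 1]  # skip self (distance 0)
        W[i, nbrs] = np.exp(-d2[i, nbrs])
    return np.maximum(W, W.T)  # symmetrize

def gl_classify(X, y_labeled, labeled_idx, k=5):
    # Solve L_uu f_u = -L_ul f_l for the unlabeled nodes, then take argmax.
    n = len(X)
    W = knn_graph(X, k)
    L = np.diag(W.sum(1)) - W  # unnormalized graph Laplacian
    u = np.setdiff1d(np.arange(n), labeled_idx)
    classes = np.unique(y_labeled)
    F_l = (y_labeled[:, None] == classes[None, :]).astype(float)  # one-hot
    F_u = np.linalg.solve(L[np.ix_(u, u)], -L[np.ix_(u, labeled_idx)] @ F_l)
    pred = np.empty(n, dtype=classes.dtype)
    pred[labeled_idx] = y_labeled
    pred[u] = classes[F_u.argmax(1)]
    return pred
```

Unlike $k$NN, which classifies each point from its labeled neighbors alone, the harmonic solution propagates labels through the whole graph, which is the mechanism behind the robustness comparison in the paper.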