NeSyFOLD: Neurosymbolic Framework for Interpretable Image Classification

30 Jan 2023 · Parth Padalkar, Huaduo Wang, Gopal Gupta

Deep learning models such as CNNs have surpassed human performance in computer vision tasks such as image classification. However, despite their sophistication, these models lack interpretability, which can lead to biased outcomes that reflect existing prejudices in the data. We aim to make the predictions made by a CNN interpretable. Hence, we present a novel framework called NeSyFOLD for creating a neurosymbolic (NeSy) model for image classification tasks. In this model, all layers of the CNN following the last convolutional layer are replaced by a stratified answer set program (ASP). A rule-based machine learning algorithm called FOLD-SE-M is used to derive the stratified answer set program from the binarized filter activations of the last convolutional layer. The answer set program can be viewed as a rule-set in which the truth value of each predicate depends on the activation of the corresponding kernel in the CNN. This rule-set serves as a global, interpretable explanation of the model, and a justification for each prediction made by the NeSy model can be obtained with an ASP interpreter. We also use the NeSyFOLD framework with a CNN trained using a sparse kernel learning technique called Elite BackProp (EBP). This leads to a significant reduction in rule-set size without compromising accuracy or fidelity, thus improving the scalability of the NeSy model and the interpretability of its rule-set. We evaluate on datasets of varied complexity and size. To make the rule-set more intuitive to understand, we also propose a novel algorithm for labelling each kernel's corresponding predicate in the rule-set with the semantic concept(s) it learns. We evaluate this semantic labelling algorithm to quantify its efficacy for both the NeSy model and the NeSy-EBP model.
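To make the pipeline concrete, below is a minimal Python sketch of the two mechanisms the abstract describes: binarizing the last convolutional layer's kernel activations into predicate truth values, and reading a rule-set as a classifier. This is not the authors' code; the kernel indices, the mean-pool-and-threshold rule, and the two toy rules are invented for illustration, and the paper learns its rule-sets with FOLD-SE-M rather than writing them by hand.

    import numpy as np

    # feature_maps: activations of the last convolutional layer for one image,
    # shape (num_kernels, H, W). Random values stand in for a real forward pass.
    feature_maps = np.random.rand(4, 7, 7)

    # Binarize each kernel: pool its feature map to a scalar, then threshold.
    # (The paper derives the binarization from training-set statistics; the
    # fixed 0.5 threshold here is an assumption made only for this sketch.)
    pooled = feature_maps.reshape(4, -1).mean(axis=1)
    facts = {f"kernel{i}": bool(pooled[i] > 0.5) for i in range(4)}

    # A toy stand-in for a learned rule-set: each rule has a target class,
    # positive body predicates, and negated ("not") body predicates.
    rules = [
        ("bathroom", ["kernel0", "kernel2"], ["kernel3"]),
        ("bedroom",  ["kernel1"],            []),
    ]

    def classify(facts, rules):
        # Fire the first rule whose body is satisfied: all positive
        # predicates true and no negated predicate true.
        for label, pos, neg in rules:
            if all(facts[p] for p in pos) and not any(facts[n] for n in neg):
                return label
        return None

    print(classify(facts, rules))

Reading the rules top-down, with negated predicates acting as exceptions, approximates the decision-list behaviour of a stratified answer set program; in the actual framework an ASP interpreter evaluates the program and also produces the justification for each prediction.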
