Enhancing Explainability of Neural Networks through Architecture Constraints

12 Jan 2019 · Zebin Yang, Aijun Zhang, Agus Sudjianto

Prediction accuracy and model explainability are the two most important objectives when developing machine learning algorithms to solve real-world problems. Neural networks are known to possess good prediction performance but often lack sufficient model interpretability. In this paper, we propose to enhance the explainability of neural networks through the following architecture constraints: a) sparse additive subnetworks; b) projection pursuit with an orthogonality constraint; and c) smooth function approximation. This leads to an explainable neural network (xNN) with a superior balance between prediction performance and model interpretability. We derive the necessary and sufficient identifiability conditions for the proposed xNN model. The multiple parameters are simultaneously estimated by a modified mini-batch gradient descent method, based on the backpropagation algorithm for calculating the derivatives and the Cayley transform for preserving the projection orthogonality. Through simulation studies under six different scenarios, we compare the proposed method to several benchmarks, including the least absolute shrinkage and selection operator, support vector machine, random forest, extreme learning machine, and multi-layer perceptron. It is shown that the proposed xNN model retains the flexibility to pursue high prediction accuracy while attaining improved interpretability. Finally, a real data example is employed as a showcase application.
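To make the architecture constraints concrete, the following is a minimal NumPy sketch, not the authors' implementation, of the two ingredients named in the abstract: an additive-index forward pass through ridge-function subnetworks, and a Cayley-transform step that keeps the columns of the projection matrix orthonormal. All names, layer sizes, and the tanh activation are illustrative assumptions.

```python
import numpy as np

# Hypothetical sizes: p input features, K subnetworks, H hidden units per subnetwork.
rng = np.random.default_rng(0)
p, K, H = 10, 3, 16

# Projection matrix with orthonormal columns (the orthogonality constraint).
W, _ = np.linalg.qr(rng.normal(size=(p, K)))

# Parameters of the K ridge-function subnetworks (illustrative initialization only).
V1 = rng.normal(size=(K, 1, H))   # first-layer weights of each subnetwork
b1 = np.zeros((K, H))             # first-layer biases
V2 = rng.normal(size=(K, H))      # output weights of each subnetwork
beta = rng.normal(size=K)         # additive combination weights
mu = 0.0                          # intercept

def xnn_forward(x):
    """Additive-index prediction: mu + sum_k beta_k * h_k(w_k^T x)."""
    z = W.T @ x                               # projected indices, shape (K,)
    out = mu
    for k in range(K):
        hidden = np.tanh(z[k] * V1[k, 0] + b1[k])
        out += beta[k] * (hidden @ V2[k])
    return out

def cayley_update(W, G, tau=0.1):
    """One orthogonality-preserving gradient step for the projection matrix W.

    W: (p, K) matrix with orthonormal columns; G: Euclidean gradient of the loss
    with respect to W; tau: step size. The skew-symmetric direction A and the
    Cayley transform (I + tau/2 A)^{-1} (I - tau/2 A) keep W^T W equal to I_K.
    """
    A = G @ W.T - W @ G.T                     # skew-symmetric direction
    I = np.eye(W.shape[0])
    return np.linalg.solve(I + 0.5 * tau * A, (I - 0.5 * tau * A) @ W)

# Sanity check: the update preserves column orthonormality (up to round-off).
G = rng.normal(size=W.shape)                  # stand-in for a mini-batch gradient
W_new = cayley_update(W, G)
assert np.allclose(W_new.T @ W_new, np.eye(K), atol=1e-8)
```

In this sketch the subnetwork gradients would be handled by ordinary backpropagation, while only the projection matrix uses the Cayley-style update; the sparsity and smoothness constraints mentioned in the abstract are omitted here.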
