Understanding Regularization to Visualize Convolutional Neural Networks

Variational methods for revealing visual concepts learned by convolutional neural networks have gained significant attention in recent years. Because they rely on noisy gradients obtained via back-propagation, such methods require regularization. We present a mathematical framework that unifies previously employed regularization strategies. Within this framework, we propose a novel technique based on Sobolev gradients that can be implemented via convolutions and, unlike total variation regularization, does not require specialized numerical treatment. Experiments on feature inversion and activation maximization demonstrate the benefits of a unified approach to regularization, such as sharper reconstructions via the proposed Sobolev filters and better control over the reconstructed scales.
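
The abstract notes that the Sobolev-gradient regularizer can be implemented via convolutions. Below is a minimal sketch, not the authors' implementation, of activation maximization in which the back-propagated gradient is smoothed by a fixed Gaussian convolution before each ascent step, approximating a Sobolev gradient; the network (VGG-16), layer index, channel index, kernel width, and step size are all illustrative assumptions.

```python
# Minimal sketch (not the paper's code): activation maximization in which the
# back-propagated gradient is smoothed by a convolution before each ascent
# step, approximating a Sobolev gradient. Network, layer, channel, kernel
# width, and step size are illustrative assumptions.
import torch
import torch.nn.functional as F
import torchvision.models as models

def gaussian_kernel(size=9, sigma=2.0):
    # Normalized 2-D Gaussian filter, one copy per RGB channel.
    ax = torch.arange(size, dtype=torch.float32) - (size - 1) / 2
    g = torch.exp(-(ax ** 2) / (2 * sigma ** 2))
    k = torch.outer(g, g)
    return (k / k.sum()).view(1, 1, size, size).repeat(3, 1, 1, 1)

def smooth(grad, kernel):
    # Channel-wise smoothing of the Euclidean gradient; this plays the role
    # of the (approximate) Sobolev filter applied via convolution.
    pad = kernel.shape[-1] // 2
    return F.conv2d(grad, kernel, padding=pad, groups=3)

model = models.vgg16(weights="IMAGENET1K_V1").eval()
kernel = gaussian_kernel()
x = (0.1 * torch.randn(1, 3, 224, 224)).requires_grad_()  # start from noise

for step in range(200):
    activation = model.features[:20](x)      # response of an intermediate layer
    loss = activation[0, 100].mean()         # maximize one (arbitrary) channel
    grad, = torch.autograd.grad(loss, x)
    with torch.no_grad():
        x += 0.05 * smooth(grad, kernel)     # ascend along the smoothed gradient
```

Replacing the Gaussian with other low-pass filters would change the scale of the structures emphasized in the reconstruction, which is the kind of scale control the abstract refers to.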
