Generative Models

Sparse Autoencoder

A Sparse Autoencoder is a type of autoencoder that employs sparsity, rather than a narrow hidden layer, to achieve an information bottleneck. Specifically, the loss function includes a term that penalizes the activations within a layer. The sparsity constraint can be imposed with L1 regularization on the activations, or with a KL-divergence penalty between the average activation of each hidden unit (over the training data) and a target sparsity distribution $p$.
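To make this concrete, below is a minimal PyTorch sketch of a sparse autoencoder trained with a reconstruction loss plus a sparsity penalty. The layer sizes, the target sparsity value `rho`, and the penalty weight are illustrative assumptions rather than values specified on this page; either the KL term or the L1 term can serve as the penalty.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseAutoencoder(nn.Module):
    """Single-hidden-layer autoencoder; sparsity is enforced via the loss, not layer width."""

    def __init__(self, input_dim=784, hidden_dim=64):
        super().__init__()
        self.encoder = nn.Linear(input_dim, hidden_dim)
        self.decoder = nn.Linear(hidden_dim, input_dim)

    def forward(self, x):
        h = torch.sigmoid(self.encoder(x))      # hidden activations in (0, 1)
        x_hat = torch.sigmoid(self.decoder(h))  # reconstruction
        return x_hat, h


def kl_sparsity_penalty(h, rho=0.05, eps=1e-8):
    """KL divergence between a target sparsity level rho and each unit's average activation."""
    rho_hat = h.mean(dim=0).clamp(eps, 1 - eps)  # average activation per hidden unit
    return (rho * torch.log(rho / rho_hat)
            + (1 - rho) * torch.log((1 - rho) / (1 - rho_hat))).sum()


# One training step combining reconstruction loss with a sparsity penalty.
model = SparseAutoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

x = torch.rand(32, 784)          # dummy batch; replace with real data
x_hat, h = model(x)

recon_loss = F.mse_loss(x_hat, x)
l1_penalty = h.abs().mean()      # L1 alternative: penalize activation magnitude directly
loss = recon_loss + 1e-3 * kl_sparsity_penalty(h)   # or: recon_loss + 1e-3 * l1_penalty

optimizer.zero_grad()
loss.backward()
optimizer.step()
```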

Image source: Jeff Jordan's blog post, which gives a detailed summary of autoencoders.

Components

Component            Type
L1 Regularization    Regularization (optional)