Binary autoencoder with random binary weights

30 Apr 2020 · Viacheslav Osaulenko

This paper presents an analysis of an autoencoder with binary $\{0, 1\}$ activations and random binary $\{0, 1\}$ weights. Such a setup places the model at the intersection of several fields: neuroscience, information theory, sparse coding, and machine learning. It is shown that sparse activation of the hidden layer arises naturally as a way to preserve information between layers. Furthermore, with a large enough hidden layer, zero reconstruction error can be achieved for any input simply by varying the thresholds of the neurons. The model preserves the similarity of inputs at the hidden layer, and this similarity preservation is maximal for dense hidden-layer activation. An analysis of the mutual information between layers shows that the difference between sparse and dense representations reflects a memory-computation trade-off. The model resembles the olfactory system of the fruit fly, and the theoretical results offer useful insights toward understanding more complex neural networks.
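
To make the setup concrete, here is a minimal NumPy sketch of one plausible reading of the model: a fixed random binary weight matrix, threshold neurons for encoding, and decoding through the transposed weights. The layer sizes, the connection density `p_conn`, and the decoding-threshold heuristic are illustrative assumptions, not values or rules taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

n_visible, n_hidden = 64, 512   # hypothetical layer sizes (not from the paper)
p_conn = 0.1                    # hypothetical density of the random binary weights

# Fixed random binary {0, 1} weight matrix; it is never learned in this model.
W = (rng.random((n_hidden, n_visible)) < p_conn).astype(int)

def encode(x, theta):
    """Binary hidden code: neuron j fires iff its overlap with x reaches threshold theta."""
    return (W @ x >= theta).astype(int)

def decode(y, theta_dec):
    """Reconstruct through the transposed weights with its own threshold
    (an assumption; the paper's actual decoding rule may differ)."""
    return (W.T @ y >= theta_dec).astype(int)

x = (rng.random(n_visible) < 0.2).astype(int)   # a sparse binary input pattern

# Varying the encoding threshold trades hidden-layer sparsity
# against reconstruction error.
for theta in range(1, 6):
    y = encode(x, theta)
    theta_dec = max(1, int(y.sum() * 0.5))      # crude heuristic decoding threshold
    x_hat = decode(y, theta_dec)
    err = int(np.abs(x - x_hat).sum())
    print(f"theta={theta}: hidden sparsity={y.mean():.3f}, reconstruction error={err}")
```

Raising the encoding threshold `theta` makes the hidden code sparser, which is the regime the abstract highlights as arising naturally when information must be preserved between layers.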
