Learning Minimal Representations with Model Invariance

29 Sep 2021  ·  Manan Tomar, Amy Zhang, Matthew E. Taylor

Sparsity has been identified as an important characteristic of neural networks that generalize well, and it forms the key idea behind constructing minimal representations. Minimal representations are ones that encode only the information required to predict well on a task and nothing more. In this paper we present a simple and effective approach to learning minimal representations. Our method, called ModInv or model invariance, learns multiple predictors on top of a single shared representation, creating a bottleneck architecture. The predictors' learning landscapes are diversified by training them independently and with different learning rates. The shared representation acts as an implicit invariance objective, discouraging it from encoding the different spurious correlations captured by the individual predictors. This in turn leads to better generalization performance. ModInv is evaluated in both reinforcement learning and self-supervised learning settings, showing strong performance gains in both. It is extremely simple to implement, adds no wall-clock training overhead, and can be applied across different problem settings.
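The abstract describes a bottleneck architecture: one shared encoder feeding several predictor heads, each trained with its own learning rate. The sketch below illustrates what such a training step could look like in PyTorch, based only on the abstract; all names, network sizes, and learning rates (Encoder, Head, head_lrs, the summed-loss encoder update) are illustrative assumptions, not the authors' implementation.

```python
# Minimal ModInv-style sketch (assumptions, not the paper's code):
# a shared encoder feeds K predictor heads, each head has its own
# optimizer and learning rate; the encoder receives gradients from all heads.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, in_dim=32, rep_dim=16):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(), nn.Linear(64, rep_dim))

    def forward(self, x):
        return self.net(x)

class Head(nn.Module):
    def __init__(self, rep_dim=16, out_dim=1):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(rep_dim, 32), nn.ReLU(), nn.Linear(32, out_dim))

    def forward(self, z):
        return self.net(z)

encoder = Encoder()
heads = nn.ModuleList([Head() for _ in range(3)])
head_lrs = [1e-3, 3e-4, 1e-4]  # assumed values: diversify heads via different learning rates

enc_opt = torch.optim.Adam(encoder.parameters(), lr=3e-4)
head_opts = [torch.optim.Adam(h.parameters(), lr=lr) for h, lr in zip(heads, head_lrs)]
loss_fn = nn.MSELoss()

def train_step(x, y):
    z = encoder(x)                       # shared (bottleneck) representation
    losses = [loss_fn(h(z), y) for h in heads]
    total = sum(losses)                  # encoder sees gradients from every head

    enc_opt.zero_grad()
    for opt in head_opts:
        opt.zero_grad()
    total.backward()
    enc_opt.step()
    for opt in head_opts:
        opt.step()                       # each head updates at its own learning rate
    return [l.item() for l in losses]

# Toy usage on random data.
x = torch.randn(8, 32)
y = torch.randn(8, 1)
print(train_step(x, y))
```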
