A Mathematical Theory of Learning

7 May 2014 · Ibrahim Alabdulmohsin

In this paper, a mathematical theory of learning is proposed that has many parallels with information theory. We consider Vapnik's General Setting of Learning, in which the learning process is defined to be the act of selecting a hypothesis in response to a given training set. Such a hypothesis can, for example, be a decision boundary in classification, a set of centroids in clustering, or a set of frequent item-sets in association rule mining. Depending on the hypothesis space and how the final hypothesis is selected, we show that a learning process can be assigned a numeric score, called learning capacity, which is analogous to Shannon's channel capacity and satisfies similar properties, such as the data-processing inequality and the information-cannot-hurt inequality. In addition, learning capacity provides the tightest possible bound on the difference between the true risk and the empirical risk of the learning process for all loss functions that are parametrized by the chosen hypothesis. It is also shown that learning capacity equivalently quantifies how sensitive the choice of the final hypothesis is to a small perturbation of the training set. Consequently, algorithmic stability is both necessary and sufficient for generalization. Although the theory does not rely on concentration inequalities, we finally show that analogues of classical results in learning theory under the Probably Approximately Correct (PAC) model can be deduced immediately from it, and we conclude with information-theoretic bounds on learning capacity.
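As an illustrative sketch of the generalization guarantee stated in the abstract (the notation below is introduced here for exposition only and is not quoted from the paper): writing H for the hypothesis selected in response to a training set S_m of m examples, R(H) for its true risk, \hat{R}(H; S_m) for its empirical risk, and C for the learning capacity of the process, the bound described above takes the schematic form

\[
\bigl|\,\mathbb{E}\bigl[\,R(H) - \hat{R}(H;\, S_m)\,\bigr]\,\bigr| \;\le\; C,
\]

with C claimed to be the tightest such constant over all loss functions parametrized by the chosen hypothesis. Read together with the stability statement in the abstract, a learning rule whose selected hypothesis is insensitive to small perturbations of S_m would correspond to a small C, and hence a small gap between empirical and true risk.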
