Search Results for author: Michael Tetelman

Found 4 papers, 0 papers with code

Bayesian Attention Networks for Data Compression

no code implementations • 29 Mar 2021 • Michael Tetelman

Bayesian Attention Networks are defined by introducing an attention factor per training-sample loss, computed as a function of two sample inputs: the training sample and the prediction sample.

Data Compression
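
As a rough illustration of the attention-factor idea in the abstract above, the sketch below weights each training sample's loss by a learned positive factor that depends on both the training input and the prediction input. The network architecture, names, and PyTorch setup are illustrative assumptions, not the paper's formulation.

```python
# Minimal sketch: per-sample attention-weighted loss, assuming the attention
# factor is a learned similarity between a training input and a prediction
# (query) input. Illustrative only; not the paper's exact construction.
import torch
import torch.nn as nn

class AttentionWeightedLoss(nn.Module):
    def __init__(self, in_dim: int, hidden: int = 32):
        super().__init__()
        # Small network producing a positive attention factor a(x_train, x_pred).
        self.att = nn.Sequential(
            nn.Linear(2 * in_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
            nn.Softplus(),
        )

    def forward(self, x_train, per_sample_loss, x_pred):
        # Broadcast the single prediction input against every training input.
        x_pred_rep = x_pred.expand(x_train.size(0), -1)
        a = self.att(torch.cat([x_train, x_pred_rep], dim=1)).squeeze(-1)
        # Each training-sample loss is scaled by its attention factor.
        return (a * per_sample_loss).mean()

# Usage: weight per-sample losses for one prediction-time input.
x_train = torch.randn(16, 8)
per_sample_loss = torch.rand(16)   # e.g. squared errors, one per training sample
x_pred = torch.randn(1, 8)         # the prediction sample's input
criterion = AttentionWeightedLoss(in_dim=8)
print(criterion(x_train, per_sample_loss, x_pred))
```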

On Compression Principle and Bayesian Optimization for Neural Networks

no code implementations • 23 Jun 2020 • Michael Tetelman

Finding methods for making generalizable predictions is a fundamental problem of machine learning.

Bayesian Optimization • Dimensionality Reduction

Variational SGD: Dropout, Generalization and Critical Point at the End of Convexity

no code implementations • ICLR 2019 • Michael Tetelman

Among the stationary solutions of the update rules there are trivial solutions with zero variances at local minima of the original loss, and a single non-trivial solution with finite variances that is a critical point at the end of convexity of the effective loss in the mean-variance space.
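
To make the mean-variance picture in the abstract concrete, here is a minimal sketch that trains a Gaussian mean and log-variance over a single weight with SGD and the reparameterization trick on a non-convex toy loss. The toy loss, parameterization, and optimizer are assumptions for illustration, not the paper's update rules.

```python
# Minimal sketch, assuming a Gaussian (mean-variance) parameterization of a
# weight trained by SGD with the reparameterization trick. It illustrates the
# general mean-variance space referred to above, not the paper's exact method.
import torch

torch.manual_seed(0)

# Non-convex 1-D loss with local minima, as a stand-in for a network loss.
def loss_fn(w):
    return torch.cos(3.0 * w) + 0.1 * w ** 2

mu = torch.tensor(1.0, requires_grad=True)        # mean of the weight
log_var = torch.tensor(0.0, requires_grad=True)   # log-variance of the weight
opt = torch.optim.SGD([mu, log_var], lr=0.05)

for step in range(500):
    opt.zero_grad()
    eps = torch.randn(64)                          # Monte Carlo noise samples
    w = mu + eps * torch.exp(0.5 * log_var)        # reparameterized weights
    # Effective loss: expectation of the original loss over the weight distribution.
    effective_loss = loss_fn(w).mean()
    effective_loss.backward()
    opt.step()

# Stationary points of this effective loss live in (mean, variance) space;
# they may have near-zero variance at a local minimum or a finite variance.
print(float(mu), float(torch.exp(log_var)))
```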

Continuous Learning: Engineering Super Features With Feature Algebras

no code implementations • 19 Dec 2013 • Michael Tetelman

We propose an iterative procedure for deriving a sequence of improving models and a corresponding sequence of sets of non-linear features on the original input space.
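
A minimal sketch of the iterative model-and-feature loop described in the abstract: each round fits a model, derives a new non-linear feature of the original inputs from it, and refits on the enlarged feature set. The Ridge model, tanh transform, and synthetic data are illustrative assumptions, not the paper's feature-algebra construction.

```python
# Minimal sketch of an iterative feature-building loop, assuming each round's
# model yields a new non-linear feature of the original inputs. Illustration
# of the general idea only.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + 0.1 * rng.normal(size=200)

features = X.copy()
for round_idx in range(3):
    # Fit the current model on the current feature set.
    model = Ridge(alpha=1.0).fit(features, y)
    preds = model.predict(features)
    print(f"round {round_idx}: train mse = {mean_squared_error(y, preds):.4f}")
    # Derive a new non-linear feature on the original input space from the
    # current model, e.g. a squashed version of its predictions, and append it.
    new_feature = np.tanh(preds).reshape(-1, 1)
    features = np.hstack([features, new_feature])
```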
