Search Results for author: Meir Feder

Found 17 papers, 3 papers with code

Error Exponent in Agnostic PAC Learning

no code implementations · 1 May 2024 · Adi Hendel, Meir Feder

In this paper, we consider PAC learning using a somewhat different tradeoff: the error exponent, a well-established analysis method in Information Theory, which describes the exponential behavior of the probability that the risk exceeds a certain threshold as a function of the sample size.
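In symbols (hedging on the paper's exact notation), an error exponent E describes the exponential decay of the tail probability of the risk with the sample size n:

```latex
\Pr\!\left[ L(\hat{h}_n) > \varepsilon \right] \doteq e^{-n\,E(\varepsilon)}
```

where \hat{h}_n is the hypothesis learned from n samples and L is its risk; \doteq denotes equality to first order in the exponent.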

Binary Classification · Knowledge Distillation +2

Batches Stabilize the Minimum Norm Risk in High Dimensional Overparameterized Linear Regression

no code implementations · 14 Jun 2023 · Shahar Stein Ioushua, Inbar Hasidim, Ofer Shayevitz, Meir Feder

Learning algorithms that divide the data into batches are prevalent in many machine-learning applications, typically offering useful trade-offs between computational efficiency and performance.
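A minimal sketch of the batching idea in this setting, assuming a noiseless overparameterized linear model; the simple averaging of per-batch minimum-norm solutions shown here is an illustration, not necessarily the paper's exact estimator:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, n_batches = 60, 100, 3              # d > per-batch size: overparameterized
w_true = rng.normal(size=d)
X = rng.normal(size=(n, d))
y = X @ w_true                            # noiseless targets

# Minimum-norm least-squares solution for each batch, then average.
batch_solutions = []
for Xb, yb in zip(np.array_split(X, n_batches), np.array_split(y, n_batches)):
    batch_solutions.append(np.linalg.pinv(Xb) @ yb)   # min-norm interpolator
w_avg = np.mean(batch_solutions, axis=0)
```

Each per-batch solution interpolates its own batch exactly; averaging them trades a little fit for lower variance, which is the kind of batch-induced stabilization the abstract alludes to.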

Computational Efficiency · regression

Beyond Ridge Regression for Distribution-Free Data

no code implementations · 17 Jun 2022 · Koby Bibas, Meir Feder

In the context of online prediction where the min-max solution is the Normalized Maximum Likelihood (NML), it has been suggested to use NML with "luckiness": A prior-like function is applied to the hypothesis class, which reduces its effective size.
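A small sketch of NML with a luckiness function, assuming a Bernoulli hypothesis class and a grid-based maximization; the `max_lik` helper and the particular luckiness choice th*(1-th) are illustrative assumptions, not the paper's construction:

```python
import itertools

n = 5  # sequence length

def max_lik(seq, luckiness=lambda th: 1.0):
    # Maximize luckiness(theta) * p_theta(seq) over a grid of Bernoulli biases.
    k = sum(seq)
    grid = [i / 200 for i in range(201)]
    return max(luckiness(th) * th**k * (1 - th)**(n - k) for th in grid)

seqs = list(itertools.product([0, 1], repeat=n))
Z = sum(max_lik(s) for s in seqs)            # plain NML normalizer; log Z is the regret
Z_lucky = sum(max_lik(s, luckiness=lambda th: th * (1 - th)) for s in seqs)
nml = {s: max_lik(s) / Z for s in seqs}      # min-max optimal assignment
```

The luckiness-weighted normalizer is smaller than the plain one, illustrating how a prior-like function shrinks the effective class size and hence the regret.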

regression

Single Layer Predictive Normalized Maximum Likelihood for Out-of-Distribution Detection

1 code implementation · NeurIPS 2021 · Koby Bibas, Meir Feder, Tal Hassner

Furthermore, we describe how to efficiently apply the derived pNML regret to any pretrained deep NN, by employing the explicit pNML for the last layer, followed by the softmax function.
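A hedged sketch of the generic pNML idea underlying this construction, using a toy categorical hypothesis class rather than the paper's last-layer derivation (the `pnml` helper is illustrative): for each candidate test label, the maximum-likelihood model is refit as if that label were true, the resulting probabilities are renormalized, and the log-normalizer is the regret:

```python
import math

def pnml(train_labels, num_classes):
    # Genie step: for each candidate label y, refit the ML categorical model
    # on train + (x, y), then evaluate the probability it assigns to y.
    n = len(train_labels)
    genie = []
    for y in range(num_classes):
        counts = [train_labels.count(c) for c in range(num_classes)]
        counts[y] += 1                       # pretend the test label is y
        genie.append(counts[y] / (n + 1))    # ML probability of y after refit
    Z = sum(genie)                           # pNML normalizer (> 1 in general)
    probs = [g / Z for g in genie]
    regret = math.log(Z)                     # pNML regret for this test point
    return probs, regret
```

A large regret flags a test point on which the hypothesis class can be bent toward any label, which is what makes this quantity usable as an out-of-distribution score.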

Out-of-Distribution Detection · Out of Distribution (OOD) Detection

Utilizing Adversarial Targeted Attacks to Boost Adversarial Robustness

no code implementations · 4 Sep 2021 · Uriya Pesso, Koby Bibas, Meir Feder

Specifically, our defense performs adversarial targeted attacks according to different hypotheses, where each hypothesis assumes a specific label for the test sample.
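A toy linear-softmax sketch of the per-hypothesis targeted-attack idea; `hypothesis_defense`, the model `W`, and the FGSM-style step are illustrative assumptions, not the paper's construction:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def hypothesis_defense(W, x, eps=0.1):
    # For each hypothesized label y, run a targeted (FGSM-style) attack that
    # pushes x toward y, then record the model's confidence in y afterwards.
    num_classes = W.shape[0]
    scores = []
    for y in range(num_classes):
        p = softmax(W @ x)
        grad = W.T @ (p - np.eye(num_classes)[y])  # grad of CE-to-y w.r.t. x
        x_adv = x - eps * np.sign(grad)            # targeted perturbation
        scores.append(float(softmax(W @ x_adv)[y]))
    return int(np.argmax(scores)), scores

W = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])  # toy 3-class linear model
pred, scores = hypothesis_defense(W, np.array([0.5, -0.2]))
```

The intuition: the hypothesis corresponding to the true label should be the easiest to confirm with a small targeted perturbation, even when the input itself has been adversarially corrupted.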

Adversarial Attack · Adversarial Robustness

Distribution Free Uncertainty for the Minimum Norm Solution of Over-parameterized Linear Regression

no code implementations · 14 Feb 2021 · Koby Bibas, Meir Feder

Modern machine learning models do not obey this paradigm: they produce an accurate prediction even with a perfect fit to the training set.
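The interpolation phenomenon described here is easy to reproduce, assuming a Gaussian overparameterized design (a toy illustration, not the paper's uncertainty construction): the minimum-norm solution fits the training set exactly even when the targets are pure noise.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 20, 50                      # more parameters than samples
X = rng.normal(size=(n, d))
y = rng.normal(size=n)             # even pure-noise targets get fit exactly
w = np.linalg.pinv(X) @ y          # minimum-norm interpolating solution
train_residual = np.linalg.norm(X @ w - y)
```

The residual is numerically zero despite the targets carrying no signal, which is exactly why classical fit-based uncertainty estimates break down in this regime.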

Learning Theory · regression

Sequential prediction under log-loss and misspecification

no code implementations · 29 Jan 2021 · Meir Feder, Yury Polyanskiy

The well-specified case corresponds to an additional assumption that the data-generating distribution belongs to the hypothesis class as well.

Density Estimation · Model Selection

Efficient Data-Dependent Learnability

no code implementations · 20 Nov 2020 · Yaniv Fogel, Tal Shapira, Meir Feder

This approach yields a learnability measure that can also be interpreted as a stability measure.

Can Implicit Bias Explain Generalization? Stochastic Convex Optimization as a Case Study

no code implementations · NeurIPS 2020 · Assaf Dauber, Meir Feder, Tomer Koren, Roi Livni

The notion of implicit bias, or implicit regularization, has been suggested as a means to explain the surprising generalization ability of modern-day overparameterized learning algorithms.

Universal Learning Approach for Adversarial Defense

no code implementations · 25 Sep 2019 · Uriya Pesso, Koby Bibas, Meir Feder

In particular, we follow the recently suggested Predictive Normalized Maximum Likelihood (pNML) scheme for universal learning, whose goal is to optimally compete with a reference learner that knows the true label of the test sample but is restricted to use a learner from a given hypothesis class.

Adversarial Defense

Deep pNML: Predictive Normalized Maximum Likelihood for Deep Neural Networks

1 code implementation · 28 Apr 2019 · Koby Bibas, Yaniv Fogel, Meir Feder

Finally, we extend the pNML to a "twice universal" solution, that provides universality for model class selection and generates a learner competing with the best one from all model classes.

Universal Supervised Learning for Individual Data

no code implementations · 22 Dec 2018 · Yaniv Fogel, Meir Feder

Universal supervised learning is considered from an information theoretic point of view following the universal prediction approach, see Merhav and Feder (1998).

Non-linear Canonical Correlation Analysis: A Compressed Representation Approach

no code implementations · 31 Oct 2018 · Amichai Painsky, Meir Feder, Naftali Tishby

In this work we introduce an information-theoretic compressed representation framework for the non-linear CCA problem (CRCCA), which extends the classical ACE approach.

Dimensionality Reduction · Quantization +1

Linear Independent Component Analysis over Finite Fields: Algorithms and Bounds

no code implementations · 16 Sep 2018 · Amichai Painsky, Saharon Rosset, Meir Feder

Importantly, we show that the overhead of our suggested algorithm (compared with the lower bound) typically decreases, as the scale of the problem grows.

Outperforming Good-Turing: Preliminary Report

no code implementations · 6 Jul 2018 · Amichai Painsky, Meir Feder

Estimating a large alphabet probability distribution from a limited number of samples is a fundamental problem in machine learning and statistics.
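For context, the classical Good-Turing estimate of the probability of all unseen symbols (the "missing mass") can be computed in a few lines; this is the standard baseline estimator the title refers to, not the paper's improved method:

```python
from collections import Counter

def good_turing_missing_mass(sample):
    # Good-Turing estimate of the total probability of unseen symbols:
    # (number of symbols seen exactly once) / (sample size).
    counts = Counter(sample)
    n1 = sum(1 for c in counts.values() if c == 1)
    return n1 / len(sample)
```

For example, in "abracadabra" only 'c' and 'd' appear exactly once out of 11 letters, so the estimated missing mass is 2/11.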

Accumulation of nonlinear interference noise in fiber-optic systems

no code implementations · 23 Oct 2013 · Ronen Dar, Meir Feder, Antonio Mecozzi, Mark Shtaif

Through a series of extensive system simulations we show that all of the previously not understood discrepancies between the Gaussian noise (GN) model and simulations can be attributed to the omission of an important, recently reported, fourth-order noise (FON) term, that accounts for the statistical dependencies within the spectrum of the interfering channel.

Optics
