Search Results for author: Mahsa Forouzesh

Found 6 papers, 4 papers with code

Differences Between Hard and Noisy-labeled Samples: An Empirical Study

1 code implementation • 20 Jul 2023 • Mahsa Forouzesh, Patrick Thiran

We study various data partitioning methods in the presence of label noise and observe that filtering noisy samples out of the hard samples with the proposed metric yields the best datasets, as evidenced by the high test accuracy of models trained on the filtered datasets.
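
As a rough sketch of the filtering step only (the paper defines the actual metric; the array name noise_scores and the threshold below are hypothetical stand-ins, not the paper's API):

```python
import numpy as np

def filter_dataset(noise_scores: np.ndarray, threshold: float) -> np.ndarray:
    """Keep samples judged hard-but-clean rather than noisy-labeled.

    `noise_scores` stands in for the paper's proposed per-sample
    metric and `threshold` for its cutoff; both names are ours.
    """
    return np.flatnonzero(noise_scores < threshold)

# Example: retain samples whose score falls below 0.5.
kept = filter_dataset(np.array([0.1, 0.9, 0.4]), 0.5)  # -> array([0, 2])
```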

Leveraging Unlabeled Data to Track Memorization

1 code implementation • 8 Dec 2022 • Mahsa Forouzesh, Hanie Sedghi, Patrick Thiran

We empirically show the effectiveness of our metric in tracking memorization across various architectures and datasets, and we provide theoretical insights into the design of the susceptibility metric.

Memorization
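
A minimal sketch of the general idea suggested by the abstract, namely probing memorization with unlabeled data; the paper's actual susceptibility metric differs in its details, and every name below is ours:

```python
import copy
import torch
import torch.nn.functional as F

def susceptibility_proxy(model, unlabeled_x, probe_x, num_classes, lr=0.01):
    """Hypothetical memorization proxy: take one SGD step on randomly
    labeled probe inputs and measure how far predictions on unlabeled
    data move. A model prone to memorizing random labels should move more.
    """
    model = copy.deepcopy(model)  # probe a copy, not the live model
    with torch.no_grad():
        before = F.softmax(model(unlabeled_x), dim=1)
    random_labels = torch.randint(0, num_classes, (probe_x.shape[0],))
    loss = F.cross_entropy(model(probe_x), random_labels)
    loss.backward()
    with torch.no_grad():
        for p in model.parameters():
            if p.grad is not None:
                p -= lr * p.grad  # one plain SGD step on random labels
        after = F.softmax(model(unlabeled_x), dim=1)
    return (after - before).abs().mean().item()
```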

Disparity Between Batches as a Signal for Early Stopping

1 code implementation • 14 Jul 2021 • Mahsa Forouzesh, Patrick Thiran

We propose a metric for evaluating the generalization ability of deep neural networks trained with mini-batch gradient descent.
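
The accompanying code gives the exact definition; as a hedged sketch of the core quantity, the disparity between two mini-batches can be read as the l2 distance between the gradients they induce (all names below are ours):

```python
import torch

def gradient_disparity(model, loss_fn, batch1, batch2):
    """Distance between gradients computed on two mini-batches.
    Sketch only; see the paper's code for the exact definition
    and any normalization it applies.
    """
    def flat_grad(batch):
        x, y = batch
        model.zero_grad()
        loss_fn(model(x), y).backward()
        return torch.cat([p.grad.flatten() for p in model.parameters()
                          if p.grad is not None])

    g1 = flat_grad(batch1)
    g2 = flat_grad(batch2)
    return torch.norm(g1 - g2).item()
```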

Early Stopping by Gradient Disparity

no code implementations • 1 Jan 2021 • Mahsa Forouzesh, Patrick Thiran

Validation-based early-stopping methods are one of the most popular techniques used to avoid over-training deep neural networks.
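
For contrast, a minimal sketch of the standard validation-based, patience-style early stopping this line refers to (the patience value is an illustrative choice):

```python
def should_stop(val_losses: list[float], patience: int = 5) -> bool:
    """Stop once the validation loss has not improved for `patience`
    consecutive epochs. Unlike the gradient-disparity criterion above,
    this requires holding out a labeled validation set.
    """
    best = min(val_losses)
    epochs_since_best = len(val_losses) - 1 - val_losses.index(best)
    return epochs_since_best >= patience
```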

Generalization Comparison of Deep Neural Networks via Output Sensitivity

1 code implementation • 30 Jul 2020 • Mahsa Forouzesh, Farnood Salehi, Patrick Thiran

We find a rather strong empirical relation between the output sensitivity and the variance in the bias-variance decomposition of the loss function, which hints at using sensitivity as a metric for comparing the generalization performance of networks, without requiring labeled data.
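
A hedged sketch of one way to estimate output sensitivity, as the average movement of the network output under small Gaussian input perturbations; sigma and num_samples are illustrative, and the paper's precise definition may differ:

```python
import torch

def output_sensitivity(model, x, sigma=0.01, num_samples=10):
    """Average output displacement under Gaussian input noise.
    Assumes a 2D output of shape (batch, classes); `sigma` and
    `num_samples` are illustrative choices, not the paper's.
    """
    with torch.no_grad():
        base = model(x)
        deltas = []
        for _ in range(num_samples):
            noisy = x + sigma * torch.randn_like(x)
            deltas.append((model(noisy) - base).norm(dim=1).mean())
        return torch.stack(deltas).mean().item()
```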

On the Reflection of Sensitivity in the Generalization Error

no code implementations • 25 Sep 2019 • Mahsa Forouzesh, Farnood Salehi, Patrick Thiran

We find a rather strong empirical relation between the output sensitivity and the variance in the bias-variance decomposition of the loss function, which hints at using sensitivity as a metric for comparing the generalization performance of networks, without requiring labeled data.
