1 code implementation • 20 Jul 2023 • Mahsa Forouzesh, Patrick Thiran
We study various data partitioning methods in the presence of label noise and observe that using the proposed metric to separate noisy samples from hard samples yields the best datasets, as evidenced by the high test accuracy of models trained on the filtered data.
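Separating noisy samples from merely hard ones can be sketched as score-based filtering. The score and threshold below are hypothetical placeholders, not the paper's actual metric:

```python
import numpy as np

def filter_by_noise_score(scores, labels, threshold):
    """Keep samples whose per-sample noisiness score is below a threshold.

    `scores` is a hypothetical metric where higher values suggest a
    mislabeled (noisy) sample rather than a genuinely hard one; the
    paper's proposed metric differs in how scores are computed.
    """
    keep = scores < threshold          # boolean mask of retained samples
    return labels[keep], keep

# Toy usage: four samples, two of which look noisy.
scores = np.array([0.1, 0.9, 0.3, 0.8])
labels = np.array([0, 1, 1, 0])
filtered_labels, mask = filter_by_noise_score(scores, labels, threshold=0.5)
```

Models are then retrained on the retained subset only.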
1 code implementation • 8 Dec 2022 • Mahsa Forouzesh, Hanie Sedghi, Patrick Thiran
We empirically show the effectiveness of our metric in tracking memorization on various architectures and datasets and provide theoretical insights into the design of the susceptibility metric.
1 code implementation • 14 Jul 2021 • Mahsa Forouzesh, Patrick Thiran
We propose a metric for evaluating the generalization ability of deep neural networks trained with mini-batch gradient descent.
no code implementations • 1 Jan 2021 • Mahsa Forouzesh, Patrick Thiran
Validation-based early-stopping methods are one of the most popular techniques used to avoid over-training deep neural networks.
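A generic patience-based version of validation early stopping (a common baseline, not the paper's specific variant) can be sketched as:

```python
def early_stopping_epoch(val_losses, patience=3):
    """Return the epoch at which training stops: the first epoch where
    validation loss has not improved for `patience` consecutive epochs.

    A minimal sketch of standard patience-based early stopping; the
    paper analyzes such validation-based methods rather than this
    exact implementation.
    """
    best_loss, best_epoch = float("inf"), 0
    for epoch, loss in enumerate(val_losses):
        if loss < best_loss:
            best_loss, best_epoch = loss, epoch   # new best checkpoint
        elif epoch - best_epoch >= patience:
            return epoch                          # patience exhausted
    return len(val_losses) - 1                    # trained to the end
```

Training halts and the checkpoint from the best epoch is kept.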
1 code implementation • 30 Jul 2020 • Mahsa Forouzesh, Farnood Salehi, Patrick Thiran
We find a rather strong empirical relation between the output sensitivity and the variance in the bias-variance decomposition of the loss function, which hints at using sensitivity as a metric for comparing the generalization performance of networks without requiring labeled data.
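Output sensitivity can be estimated by measuring how much a network's output changes under small input perturbations. The estimator below is a generic Monte-Carlo sketch (Gaussian perturbations of scale `eps`); the paper's exact definition may differ:

```python
import numpy as np

def output_sensitivity(f, X, eps=1e-2, n_draws=10, seed=0):
    """Estimate output sensitivity of a model `f` on inputs `X` as the
    mean squared change in output under small Gaussian input noise.

    A hedged sketch: `f` is any callable mapping an input batch to
    outputs, and this particular perturbation scheme is an assumption,
    not the paper's precise formulation.
    """
    rng = np.random.default_rng(seed)
    base = f(X)
    diffs = []
    for _ in range(n_draws):
        noise = eps * rng.standard_normal(X.shape)
        diffs.append(np.mean((f(X + noise) - base) ** 2))
    return float(np.mean(diffs))
```

Because only model outputs are compared, no labels are needed, which is what makes such a metric usable for unlabeled data.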
no code implementations • 25 Sep 2019 • Mahsa Forouzesh, Farnood Salehi, Patrick Thiran
We find a rather strong empirical relation between the output sensitivity and the variance in the bias-variance decomposition of the loss function, which hints at using sensitivity as a metric for comparing the generalization performance of networks without requiring labeled data.