no code implementations • 1 Feb 2024 • Guihong Li, Hsiang Hsu, Chun-Fu Chen, Radu Marculescu
This paper serves as a bridge, addressing the gap by providing a unifying framework of machine unlearning for image-to-image generative models.
no code implementations • 1 Feb 2024 • Hsiang Hsu, Guihong Li, Shaohan Hu, Chun-Fu Chen
Predictive multiplicity refers to the phenomenon in which classification tasks may admit multiple competing models that achieve almost-equally-optimal performance, yet generate conflicting outputs for individual samples.
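Predictive multiplicity can be made concrete with a small experiment. The sketch below (illustrative only, not the paper's method) trains two models that differ only in their random seed, checks that their test accuracies are nearly equal, and then measures the fraction of individual samples on which their predictions conflict:

```python
# Hedged sketch of predictive multiplicity: two near-equally-accurate
# models can still disagree on individual samples. Synthetic data and
# random forests are illustrative choices, not the paper's setup.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, y_train = X[:1500], y[:1500]
X_test, y_test = X[1500:], y[1500:]

# Two models differing only in their random seed.
m1 = RandomForestClassifier(n_estimators=50, random_state=1).fit(X_train, y_train)
m2 = RandomForestClassifier(n_estimators=50, random_state=2).fit(X_train, y_train)

acc1 = m1.score(X_test, y_test)
acc2 = m2.score(X_test, y_test)
# Disagreement rate: fraction of test samples with conflicting predictions.
disagreement = float(np.mean(m1.predict(X_test) != m2.predict(X_test)))
print(f"acc1={acc1:.3f} acc2={acc2:.3f} disagreement={disagreement:.3f}")
```

Even when `acc1` and `acc2` are within a fraction of a percent of each other, the disagreement rate is typically nonzero, which is exactly the phenomenon the abstract describes.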
no code implementations • 22 Dec 2023 • Guihong Li, Hsiang Hsu, Chun-Fu Chen, Radu Marculescu
The rapid growth of machine learning has spurred legislative initiatives such as "the Right to be Forgotten," allowing users to request data removal.
1 code implementation • 5 Jul 2023 • Guihong Li, Duc Hoang, Kartikeya Bhardwaj, Ming Lin, Zhangyang Wang, Radu Marculescu
Recently, zero-shot (or training-free) Neural Architecture Search (NAS) approaches have been proposed to liberate NAS from the expensive training process.
no code implementations • 13 May 2023 • Guihong Li, Kartikeya Bhardwaj, Yuedong Yang, Radu Marculescu
Anytime neural networks (AnytimeNNs) are a promising solution to adaptively adjust the model complexity at runtime under various hardware resource constraints.
1 code implementation • 26 Jan 2023 • Guihong Li, Yuedong Yang, Kartikeya Bhardwaj, Radu Marculescu
Based on this theoretical analysis, we propose a new zero-shot proxy, ZiCo, the first proxy that works consistently better than #Params.
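The core idea behind gradient-statistics proxies of this kind can be sketched without training. The toy example below (a simplified illustration loosely inspired by ZiCo, not the paper's exact formulation) scores an untrained two-layer network by the mean-to-standard-deviation ratio of its per-sample gradients; all architecture and loss choices here are illustrative assumptions:

```python
# Hedged sketch of a gradient-statistics zero-shot proxy (simplified;
# ZiCo's actual definition differs in details). An untrained network is
# scored from per-sample gradient statistics alone -- no training loop.
import numpy as np

rng = np.random.default_rng(0)

# Tiny untrained 2-layer net: x -> relu(x @ W1) @ W2 (illustrative).
W1 = rng.standard_normal((8, 16)) * 0.1
W2 = rng.standard_normal((16, 1)) * 0.1

def per_sample_grads(X, y):
    """Manual backprop of squared loss, one sample at a time."""
    g1, g2 = [], []
    for x, t in zip(X, y):
        h_pre = x @ W1
        h = np.maximum(h_pre, 0.0)            # ReLU
        pred = h @ W2
        d_pred = 2.0 * (pred - t)             # dL/dpred
        gW2 = np.outer(h, d_pred)
        d_h = (W2 * d_pred).ravel() * (h_pre > 0)
        gW1 = np.outer(x, d_h)
        g1.append(gW1)
        g2.append(gW2)
    return np.array(g1), np.array(g2)

X = rng.standard_normal((32, 8))
y = rng.standard_normal((32, 1))
g1, g2 = per_sample_grads(X, y)

def layer_score(g):
    # Per-weight mean |gradient| / std(gradient), summed over the layer.
    mean_abs = np.abs(g).mean(axis=0)
    std = g.std(axis=0) + 1e-8
    return np.log((mean_abs / std).sum() + 1e-8)

proxy = layer_score(g1) + layer_score(g2)     # higher ~ better trainability
print(f"proxy score: {proxy:.3f}")
```

Because the score is computed from a single backward pass over a minibatch, it can rank candidate architectures at a tiny fraction of the cost of training each one, which is the appeal of zero-shot NAS proxies.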
1 code implementation • CVPR 2023 • Yuedong Yang, Guihong Li, Radu Marculescu
Despite its importance for federated learning, continual learning and many other applications, on-device training remains an open problem for EdgeAI.
no code implementations • 1 Aug 2021 • Guihong Li, Sumit K. Mandal, Umit Y. Ogras, Radu Marculescu
This paper proposes FLASH, a very fast NAS methodology that co-optimizes the DNN accuracy and performance on a real hardware platform.
no code implementations • 1 Jan 2021 • Kartikeya Bhardwaj, Guihong Li, Radu Marculescu
(ii) Can certain topological characteristics of deep networks indicate a priori (i.e., without training) which models, with a different number of parameters/FLOPs/layers, achieve a similar accuracy?
2 code implementations • CVPR 2021 • Kartikeya Bhardwaj, Guihong Li, Radu Marculescu
In this paper, we reveal that the topology of the concatenation-type skip connections is closely related to the gradient propagation which, in turn, enables a predictable behavior of DNNs' test performance.