no code implementations • 22 Apr 2024 • Jingwen Ye, Ruonan Yu, Songhua Liu, Xinchao Wang
To investigate the impact of changes in training data on a pre-trained model, a common approach is leave-one-out retraining.
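As a rough illustration of that baseline, the sketch below retrains a stand-in scikit-learn classifier once per left-out training point and records the change in held-out loss; the dataset, model, and influence score are illustrative assumptions, not the paper's setup.

```python
# Illustrative leave-one-out retraining: measure how removing one training
# point changes a model's loss on held-out data. Stand-in setup only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import log_loss

X, y = make_classification(n_samples=200, n_features=10, random_state=0)
X_train, y_train, X_test, y_test = X[:150], y[:150], X[150:], y[150:]

# Model trained on the full training set.
full_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
full_loss = log_loss(y_test, full_model.predict_proba(X_test))

# Retrain with each training point left out and record the loss change.
influence = []
for i in range(len(X_train)):
    mask = np.ones(len(X_train), dtype=bool)
    mask[i] = False
    loo_model = LogisticRegression(max_iter=1000).fit(X_train[mask], y_train[mask])
    loo_loss = log_loss(y_test, loo_model.predict_proba(X_test))
    influence.append(loo_loss - full_loss)  # > 0: removing point i hurts held-out loss

print("most influential training index:", int(np.argmax(np.abs(influence))))
```

One full retraining per training point is exactly the cost that makes this baseline impractical for deep models.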
no code implementations • 22 Apr 2024 • Jingwen Ye, Xinchao Wang
The training of contemporary deep learning models heavily relies on publicly available data, posing a risk of unauthorized access to online data and raising concerns about data privacy.
no code implementations • 20 Dec 2023 • Jingwen Ye, Ruonan Yu, Songhua Liu, Xinchao Wang
Our approach outperforms state-of-the-art attack methods and can be readily deployed as a plug-and-play solution.
1 code implementation • 31 May 2023 • KaiXuan Chen, Shunyu Liu, Tongtian Zhu, Tongya Zheng, Haofei Zhang, Zunlei Feng, Jingwen Ye, Mingli Song
Graph Neural Networks (GNNs) have emerged as a powerful category of learning architecture for handling graph-structured data.
1 code implementation • 19 Apr 2023 • Songhua Liu, Jingwen Ye, Xinchao Wang
Existing approaches either apply the holistic style of the style image in a global manner, or migrate local colors and textures of the style image to the content counterparts in a pre-defined way.
1 code implementation • CVPR 2023 • Jingwen Ye, Songhua Liu, Xinchao Wang
Unlike prior methods that update all or at least part of the parameters in the target network throughout the knowledge transfer process, PNC conducts partial parametric "cloning" from a source network and then injects the cloned module into the target, without modifying its parameters.
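The sketch below illustrates the clone-and-inject idea on toy networks: one block is deep-copied from a stand-in source model and inserted into a target whose own parameters are left untouched. The architectures and the parallel-branch insertion are assumptions for illustration, not the paper's PNC procedure.

```python
# Toy illustration of "clone and inject": copy one block from a pretrained
# source network and insert it into a target network whose own parameters
# are never modified. Architectures and names are illustrative only.
import copy
import torch
import torch.nn as nn

source = nn.Sequential(            # stand-in for a pretrained source network
    nn.Linear(32, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),  # the block to be "cloned"
    nn.Linear(64, 10),
)

target = nn.Sequential(            # stand-in for the (also pretrained) target
    nn.Linear(32, 64), nn.ReLU(),
    nn.Linear(64, 10),
)
for p in target.parameters():      # target parameters stay untouched
    p.requires_grad_(False)

cloned = copy.deepcopy(source[2])  # partial parametric "cloning"

class Injected(nn.Module):
    """Target with the cloned block inserted on a parallel branch."""
    def __init__(self, target, cloned):
        super().__init__()
        self.target, self.cloned = target, cloned

    def forward(self, x):
        h = torch.relu(self.target[0](x))
        h = h + self.cloned(h)             # cloned module augments the feature
        return self.target[2](h)

model = Injected(target, cloned)
print(model(torch.randn(4, 32)).shape)     # torch.Size([4, 10])
```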
no code implementations • CVPR 2023 • Songhua Liu, Jingwen Ye, Runpeng Yu, Xinchao Wang
In this paper, we explore the problem of slimmable dataset condensation, to extract a smaller synthetic dataset given only previous condensation results.
1 code implementation • NIPS 2022 • Songhua Liu, Kai Wang, Xingyi Yang, Jingwen Ye, Xinchao Wang
In this paper, we study dataset distillation (DD) from a novel perspective and introduce a dataset factorization approach, termed HaBa, which is a plug-and-play strategy portable to any existing DD baseline.
3 code implementations • 30 Oct 2022 • Songhua Liu, Kai Wang, Xingyi Yang, Jingwen Ye, Xinchao Wang
In this paper, we study dataset distillation (DD) from a novel perspective and introduce a dataset factorization approach, termed HaBa, which is a plug-and-play strategy portable to any existing DD baseline.
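As a rough sketch of the factorization idea, the snippet below stores a few learnable bases plus a handful of small "hallucinator" networks, so the number of usable synthetic samples is the product of the two; the shapes and network designs are illustrative assumptions, not HaBa itself.

```python
# Rough sketch of dataset factorization: B stored bases combined with H small
# "hallucinator" networks yield B*H synthetic training images. Shapes and
# hallucinator design are illustrative assumptions, not the HaBa method.
import torch
import torch.nn as nn

B, H, C, S = 10, 5, 3, 32                      # bases, hallucinators, channels, size

bases = nn.Parameter(torch.randn(B, C, S, S))  # learnable synthetic bases

hallucinators = nn.ModuleList(
    nn.Sequential(nn.Conv2d(C, 16, 3, padding=1), nn.ReLU(),
                  nn.Conv2d(16, C, 3, padding=1))
    for _ in range(H)
)

def materialize():
    """Expand the factorized storage into B*H synthetic images."""
    samples = [h(bases) for h in hallucinators]   # each: (B, C, S, S)
    return torch.cat(samples, dim=0)              # (B*H, C, S, S)

print(materialize().shape)                        # torch.Size([50, 3, 32, 32])
```

In an actual DD pipeline, both the bases and the hallucinators would be optimized against the distillation objective.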
1 code implementation • 24 Oct 2022 • Xingyi Yang, Daquan Zhou, Songhua Liu, Jingwen Ye, Xinchao Wang
Given a collection of heterogeneous models pre-trained from distinct sources and with diverse architectures, the goal of DeRy, as its name implies, is to first dissect each model into distinctive building blocks, and then selectively reassemble the derived blocks to produce customized networks under both the hardware resource and performance constraints.
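A deliberately simplified sketch of the dissect-and-reassemble idea follows: each stand-in pretrained model is split into stages, and one block per stage is picked under a parameter budget. The scores, budget, and models are made up, and real reassembly additionally has to handle block compatibility, so this is not the DeRy algorithm.

```python
# Simplified dissect-and-reassemble sketch: each pretrained model is split
# into stages ("dissect"), then one block per stage is chosen under a total
# parameter budget ("reassemble"). All numbers here are made up.
import torch
import torch.nn as nn

def n_params(m):
    return sum(p.numel() for p in m.parameters())

# Two stand-in pretrained models with compatible stage interfaces.
zoo = {
    "A": [nn.Linear(32, 64), nn.Linear(64, 64), nn.Linear(64, 10)],
    "B": [nn.Sequential(nn.Linear(32, 64), nn.ReLU()),
          nn.Sequential(nn.Linear(64, 64), nn.ReLU()),
          nn.Linear(64, 10)],
}
proxy_score = {"A": [0.7, 0.9, 0.8], "B": [0.9, 0.6, 0.9]}  # made-up quality proxy
budget = 7000                                               # max total parameters

picked, used = [], 0
for stage in range(3):
    # Prefer the higher-scoring block for this stage, subject to the budget.
    for name in sorted(zoo, key=lambda m: -proxy_score[m][stage]):
        block = zoo[name][stage]
        if used + n_params(block) <= budget:
            picked.append((name, stage))
            used += n_params(block)
            break

assembled = nn.Sequential(*(zoo[n][s] for n, s in picked))
print(picked, used, assembled(torch.randn(4, 32)).shape)
```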
1 code implementation • 7 Sep 2022 • Haoling Li, Jie Song, Mengqi Xue, Haofei Zhang, Jingwen Ye, Lechao Cheng, Mingli Song
This survey aims to present a comprehensive review of NTs and attempts to identify how they enhance model interpretability.
1 code implementation • 17 Jul 2022 • Jingwen Ye, Yifang Fu, Jie Song, Xingyi Yang, Songhua Liu, Xin Jin, Mingli Song, Xinchao Wang
Life-long learning aims at learning a sequence of tasks without forgetting the previously acquired knowledge.
1 code implementation • 13 Jul 2022 • Songhua Liu, Jingwen Ye, Sucheng Ren, Xinchao Wang
Prior approaches, despite the promising results, have relied on either estimating dense attention to compute per-point matching, which is limited to only coarse scales due to the quadratic memory cost, or fixing the number of correspondences to achieve linear complexity, which lacks flexibility.
1 code implementation • 4 Jul 2022 • Xingyi Yang, Jingwen Ye, Xinchao Wang
The core idea of KF lies in the modularization and assemblability of knowledge: given a pretrained network model as input, KF aims to decompose it into several factor networks, each of which handles only a dedicated task and maintains task-specific knowledge factorized from the source network.
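The toy sketch below factorizes a two-task teacher into two single-task "factor" students, each distilled only from the teacher head for its own task; the architectures and the plain KL objective are illustrative assumptions rather than the paper's KF method.

```python
# Toy sketch of knowledge factorization: a multi-task teacher is decomposed
# into per-task "factor" students, each distilled only from the teacher head
# for its own task. Architectures and loss are illustrative, not the paper's KF.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Teacher(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(32, 64), nn.ReLU())
        self.heads = nn.ModuleList([nn.Linear(64, 10), nn.Linear(64, 5)])  # two tasks

    def forward(self, x, task):
        return self.heads[task](self.backbone(x))

teacher = Teacher().eval()
factors = [nn.Sequential(nn.Linear(32, 32), nn.ReLU(), nn.Linear(32, d))
           for d in (10, 5)]                       # one small student per task

x = torch.randn(64, 32)                            # stand-in transfer data
for task, student in enumerate(factors):
    opt = torch.optim.Adam(student.parameters(), lr=1e-3)
    for _ in range(100):
        with torch.no_grad():
            t_logits = teacher(x, task)            # task-specific teacher knowledge
        loss = F.kl_div(F.log_softmax(student(x), dim=1),
                        F.softmax(t_logits, dim=1), reduction="batchmean")
        opt.zero_grad(); loss.backward(); opt.step()

print([f(x[:2]).shape for f in factors])           # (2, 10) and (2, 5)
```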
2 code implementations • 5 May 2022 • Jie Song, Ying Chen, Jingwen Ye, Mingli Song
Knowledge distillation (KD) has become a well-established paradigm for compressing deep neural networks.
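For reference, a minimal sketch of the classic temperature-scaled KD objective (Hinton-style soft targets mixed with the hard-label loss); the paper may study a different variant.

```python
# Minimal sketch of the classic knowledge-distillation loss: match the
# student's softened predictions to the teacher's, plus the usual CE term.
import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    soft = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                    F.softmax(teacher_logits / T, dim=1),
                    reduction="batchmean") * (T * T)   # rescale gradient magnitude
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

s = torch.randn(8, 10, requires_grad=True)
t = torch.randn(8, 10)
y = torch.randint(0, 10, (8,))
print(kd_loss(s, t, y).item())
```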
1 code implementation • 5 Dec 2021 • Jingwen Ye, Yining Mao, Jie Song, Xinchao Wang, Cheng Jin, Mingli Song
In other words, all users may employ a model in SDB for inference, but only authorized users get access to KD from the model.
1 code implementation • ICCV 2021 • Zheng Li, Jingwen Ye, Mingli Song, Ying Huang, Zhigeng Pan
However, existing pose distillation works rely on a heavy pre-trained estimator to perform knowledge transfer and require a complex two-stage learning procedure.
no code implementations • CVPR 2020 • Jingwen Ye, Yixin Ji, Xinchao Wang, Xin Gao, Mingli Song
Then a dual generator is trained by taking the output from the former generator as input.
1 code implementation • CVPR 2020 • Jie Song, Yixin Chen, Jingwen Ye, Xinchao Wang, Chengchao Shen, Feng Mao, Mingli Song
In this paper, we propose the DEeP Attribution gRAph (DEPARA) to investigate the transferability of knowledge learned from PR-DNNs.
1 code implementation • 28 May 2019 • Jingwen Ye, Xinchao Wang, Yixin Ji, Kairi Ou, Mingli Song
Many well-trained Convolutional Neural Network (CNN) models have now been released online by developers for the sake of effortless reproduction.
1 code implementation • CVPR 2019 • Jingwen Ye, Yixin Ji, Xinchao Wang, Kairi Ou, Dapeng Tao, Mingli Song
In this paper, we investigate a novel deep-model reusing task.
8 code implementations • 11 May 2017 • Yongcheng Jing, Yezhou Yang, Zunlei Feng, Jingwen Ye, Yizhou Yu, Mingli Song
We first propose a taxonomy of current algorithms in the field of NST.