no code implementations • 6 Feb 2024 • Shijun Liang, Evan Bell, Qing Qu, Rongrong Wang, Saiprasad Ravishankar
In this work, we first analyze how DIP recovers information from undersampled imaging measurements by studying the training dynamics of the underlying networks in the kernel regime for different architectures.
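For readers unfamiliar with the setting, a minimal sketch of Deep Image Prior (DIP) for undersampled recovery is shown below; the network, measurement operator, and stopping rule are illustrative placeholders, not the configuration analyzed in the paper.

```python
# Minimal Deep Image Prior (DIP) sketch: fit an untrained network to map a
# fixed random input to the undersampled measurements; early stopping acts
# as the implicit regularizer. All sizes and architectures are placeholders.
import torch

torch.manual_seed(0)
n, m = 64, 32                        # signal dimension, number of measurements
A = torch.randn(m, n) / m ** 0.5     # toy undersampling operator
x_true = torch.randn(n)
y = A @ x_true                       # undersampled measurements

net = torch.nn.Sequential(           # small untrained network as the prior
    torch.nn.Linear(n, 128), torch.nn.ReLU(), torch.nn.Linear(128, n)
)
z = torch.randn(n)                   # fixed random input, never updated
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(2000):             # stop early rather than fit the noise
    opt.zero_grad()
    loss = ((A @ net(z) - y) ** 2).mean()
    loss.backward()
    opt.step()

x_hat = net(z).detach()              # recovered signal
```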
no code implementations • 3 Feb 2024 • Haitao Mao, Guangliang Liu, Yao Ma, Rongrong Wang, Jiliang Tang
In-Context Learning (ICL) equips Large Language Models (LLMs) with the capacity to learn in context, achieving downstream generalization from only a few in-context examples and without any gradient updates.
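As a generic illustration of ICL (not tied to this paper's experiments), a model can be prompted with a handful of labeled demonstrations and asked to complete the label for a new input:

```python
# Build a few-shot prompt: the LLM infers the task from the demonstrations
# and labels the query by completion, with no parameter updates.
demos = [("The movie was wonderful.", "positive"),
         ("I wasted two hours.", "negative")]
query = "A delightful surprise from start to finish."

prompt = "\n".join(f"Review: {x}\nSentiment: {y}" for x, y in demos)
prompt += f"\nReview: {query}\nSentiment:"
print(prompt)  # sent to an LLM, which completes with the predicted label
```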
no code implementations • 26 Oct 2023 • Guangliang Liu, Zhiyu Xue, Xitong Zhang, Kristen Marie Johnson, Rongrong Wang
Fine-tuning pretrained language models (PLMs) for downstream tasks is a large-scale optimization problem, in which the choice of the training algorithm critically determines how well the trained model can generalize to unseen test data, especially in the context of few-shot learning.
1 code implementation • 11 Sep 2023 • Ismail Alkhouri, Shijun Liang, Rongrong Wang, Qing Qu, Saiprasad Ravishankar
In particular, we present a robustification strategy that improves the resilience of DL-based MRI reconstruction methods by utilizing pretrained diffusion models as noise purifiers.
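A rough sketch of the purification idea follows; the noise-prediction network below is a random-weight stand-in (a real purifier would use a pretrained diffusion model), and the noise schedule is an arbitrary choice.

```python
# Diffusion-style purification sketch: slightly diffuse the (possibly
# perturbed) input, then take a one-step denoised estimate. The conv layer
# is a placeholder for a pretrained noise-prediction network.
import torch

torch.manual_seed(0)
eps_net = torch.nn.Conv2d(1, 1, kernel_size=3, padding=1)  # placeholder

def purify(x, alpha_bar=0.9):
    noise = torch.randn_like(x)
    x_t = alpha_bar ** 0.5 * x + (1 - alpha_bar) ** 0.5 * noise  # forward diffusion
    eps_hat = eps_net(x_t)                                       # predicted noise
    x0_hat = (x_t - (1 - alpha_bar) ** 0.5 * eps_hat) / alpha_bar ** 0.5
    return x0_hat                                                # purified estimate

x_adv = torch.randn(1, 1, 32, 32)    # e.g., an adversarially perturbed input
x_clean = purify(x_adv)              # then passed to the reconstruction network
```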
no code implementations • 30 May 2023 • Xitong Zhang, Avrajit Ghosh, Guangliang Liu, Rongrong Wang
It is widely recognized that the generalization ability of neural networks can be greatly enhanced by carefully designing the training procedure.
no code implementations • 2 Feb 2023 • Avrajit Ghosh, He Lyu, Xitong Zhang, Rongrong Wang
It is well known that the finite step size ($h$) in Gradient Descent (GD) implicitly regularizes solutions toward flatter minima.
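One standard way to make this precise is backward error analysis (shown here for intuition; the paper's exact formulation may differ): GD with step size $h$ approximately follows the gradient flow of a modified loss whose extra term penalizes sharp minima.

```latex
% GD iterates \theta_{k+1} = \theta_k - h \nabla L(\theta_k) approximately
% track the gradient flow of a modified loss (backward-error-analysis form):
\[
\tilde{L}(\theta) \;=\; L(\theta) \;+\; \frac{h}{4}\,\bigl\|\nabla L(\theta)\bigr\|^{2},
\qquad
\dot{\theta} \;\approx\; -\nabla \tilde{L}(\theta).
\]
% The penalty (h/4)\|\nabla L\|^2 is small in flat regions, so a larger h
% biases GD toward flatter minima.
```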
no code implementations • 28 Aug 2022 • Santhosh Karnik, Rongrong Wang, Mark Iwen
The approach is based on the observation that the existence of a Johnson-Lindenstrauss embedding $A\in\mathbb{R}^{d\times D}$ of a given high-dimensional set $S\subset\mathbb{R}^D$ into a low-dimensional cube $[-M, M]^d$ implies that for any Hölder (or uniformly) continuous function $f:S\to\mathbb{R}^p$, there exists a Hölder (or uniformly) continuous function $g:[-M, M]^d\to\mathbb{R}^p$ such that $g(Ax)=f(x)$ for all $x\in S$.
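A toy numerical check of the Johnson-Lindenstrauss property this observation relies on (dimensions and sample counts below are arbitrary illustrative choices):

```python
# Verify that a random Gaussian map approximately preserves pairwise
# distances between a few points, as a JL embedding should.
import numpy as np

rng = np.random.default_rng(0)
D, d, n = 1000, 50, 20
S = rng.standard_normal((n, D))             # n points in R^D
A = rng.standard_normal((d, D)) / d ** 0.5  # random embedding A in R^{d x D}
X = S @ A.T                                 # embedded points Ax in R^d

for i in range(3):
    for j in range(i + 1, 4):
        ratio = np.linalg.norm(X[i] - X[j]) / np.linalg.norm(S[i] - S[j])
        print(f"pair ({i},{j}): distance ratio {ratio:.3f}")  # close to 1
```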
no code implementations • 13 Jun 2022 • A. Martina Neuman, Rongrong Wang, Yuying Xie
Graph Neural Networks (GNNs) have emerged as formidable resources for processing graph-based information across diverse applications.
1 code implementation • 21 Oct 2021 • Hang Cheng, Shugong Xu, Xiufeng Jiang, Rongrong Wang
In this paper, we propose a matting method that uses Flexible Guidance Input as the user hint, meaning it can take a trimap, scribblemap, or clickmap as guidance information, or even work without any guidance input.
1 code implementation • 13 Jun 2021 • Mathias Louboutin, Ali Siahkoohi, Rongrong Wang, Felix J. Herrmann
Thanks to the combination of state-of-the-art accelerators and highly optimized open-source software frameworks, there has been tremendous progress in the performance of deep neural networks.
no code implementations • 18 Aug 2020 • Hao Guo, Xintao Ren, Rongrong Wang, Zhun Cai, Kai Shuang, Yue Sun
In this paper, we propose HUIHEN (Hierarchical User Intention-Habit Extract Network), a model that leverages users' behavior information in mobile banking apps.
no code implementations • ICLR 2021 • Xiaorui Liu, Yao Li, Rongrong Wang, Jiliang Tang, Ming Yan
Communication compression has become a key strategy to speed up distributed optimization.
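For context, one widely used compression operator is top-$k$ sparsification with error feedback; a minimal sketch is below (this is a generic scheme, not necessarily the one analyzed in the paper).

```python
# Top-k gradient sparsification with error feedback: transmit only the k
# largest-magnitude entries and carry the dropped mass into the next round.
import numpy as np

def topk_compress(g, k):
    out = np.zeros_like(g)
    idx = np.argsort(np.abs(g))[-k:]   # indices of the k largest entries
    out[idx] = g[idx]
    return out

rng = np.random.default_rng(0)
grad = rng.standard_normal(100)        # a worker's local gradient
residual = np.zeros_like(grad)         # error-feedback memory

corrected = grad + residual            # re-inject previously dropped mass
compressed = topk_compress(corrected, k=10)  # what actually gets communicated
residual = corrected - compressed      # remember the compression error
```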
1 code implementation • NeurIPS 2019 • He Lyu, Ningyu Sha, Shuyang Qin, Ming Yan, Yuying Xie, Rongrong Wang
This paper extends robust principal component analysis (RPCA) to nonlinear manifolds.
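For reference, the classical (linear) RPCA program that this work generalizes decomposes a data matrix into low-rank and sparse parts:

```latex
% Principal component pursuit: split M into a low-rank part L (the clean
% structure) and a sparse part S (outliers). The cited paper extends this
% linear model to data lying near a nonlinear manifold.
\[
\min_{L,\,S} \;\; \|L\|_{*} \;+\; \lambda\,\|S\|_{1}
\quad \text{subject to} \quad M \;=\; L + S .
\]
```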
1 code implementation • 29 Sep 2019 • Rongrong Wang, Xiaopeng Zhang
We provide a rigorous mathematical treatment of the crowding issue in data visualization, which arises when high-dimensional data sets are projected down to low dimensions for visualization.
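A toy illustration of crowding (numbers are arbitrary, not from the paper): in high dimensions pairwise distances concentrate around a common value, so after a naive projection to 2-D many moderately separated points are forced nearly on top of each other.

```python
# Compare the spread of pairwise distances before and after projecting
# high-dimensional points to 2-D: the max/min ratio blows up in 2-D,
# i.e., many pairs crowd together relative to the overall scale.
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 100))            # 200 points in 100 dimensions
P = rng.standard_normal((2, 100)) / 100 ** 0.5
Y = X @ P.T                                    # naive linear projection to 2-D

def spread(Z):
    diff = Z[:, None, :] - Z[None, :, :]
    d = np.linalg.norm(diff, axis=-1)
    d = d[np.triu_indices_from(d, k=1)]        # upper-triangular pairs only
    return d.max() / d.min()

print("max/min distance ratio, 100-D:", round(spread(X), 1))
print("max/min distance ratio, 2-D:  ", round(spread(Y), 1))
```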