no code implementations • 22 Apr 2024 • Sheng Liu, Zhiqiang Yao, Xuemeng Cao, Xiaowen Cai
In recent years, increasingly stringent requirements have been placed on context-adaptive navigation (CAN).
no code implementations • 1 Apr 2024 • Weixin Liang, Yaohui Zhang, Zhengxuan Wu, Haley Lepp, Wenlong Ji, Xuandong Zhao, Hancheng Cao, Sheng Liu, Siyu He, Zhi Huang, Diyi Yang, Christopher Potts, Christopher D Manning, James Y. Zou
To address this gap, we conduct the first systematic, large-scale analysis across 950,965 papers published between January 2020 and February 2024 on the arXiv, bioRxiv, and Nature portfolio journals, using a population-level statistical framework to measure the prevalence of LLM-modified content over time.
no code implementations • 11 Mar 2024 • Weixin Liang, Zachary Izzo, Yaohui Zhang, Haley Lepp, Hancheng Cao, Xuandong Zhao, Lingjiao Chen, Haotian Ye, Sheng Liu, Zhi Huang, Daniel A. McFarland, James Y. Zou
We present an approach for estimating the fraction of text in a large corpus which is likely to be substantially modified or produced by a large language model (LLM).
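The two entries above describe a population-level estimator for the fraction of LLM-modified text in a corpus. As a toy illustration of the general mixture idea only (this is not the authors' actual framework; the token distributions `p_human` and `p_llm` and the grid-search MLE below are illustrative assumptions):

```python
import numpy as np

def estimate_llm_fraction(token_counts, p_human, p_llm, grid=1001):
    """Grid-search MLE for the mixture weight alpha in
    (1 - alpha) * p_human + alpha * p_llm, given observed token counts.
    A toy sketch of corpus-level (not per-document) estimation."""
    alphas = np.linspace(0.0, 1.0, grid)
    # log-likelihood of the observed counts under each candidate mixture
    ll = np.array([
        (token_counts * np.log((1 - a) * p_human + a * p_llm + 1e-12)).sum()
        for a in alphas
    ])
    return alphas[ll.argmax()]
```

With synthetic counts drawn from a 30% mixture, the grid search recovers a value near 0.3; the point is that corpus-level word statistics can identify the mixture weight even when no single document can be classified reliably.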
no code implementations • 13 Feb 2024 • Sheng Liu, Zihan Wang, Qi Lei
In this work, we propose a strong reconstruction attack in the setting of federated learning.
no code implementations • 10 Dec 2023 • Jianwei Li, Sheng Liu, Qi Lei
Language models trained via federated learning (FL) demonstrate impressive capabilities in handling complex tasks while protecting user privacy.
no code implementations • 27 Nov 2023 • Weicheng Zhu, Sheng Liu, Carlos Fernandez-Granda, Narges Razavian
Self-supervised learning (SSL) has emerged as a powerful technique for learning rich representations from unlabeled data.
1 code implementation • 11 Nov 2023 • Sheng Liu, Haotian Ye, Lei Xing, James Zou
On a new query, instead of adding demonstrations to the prompt, we shift the latent states of the LLM using the ICV.
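The entry above replaces in-prompt demonstrations with a latent-state shift. A minimal numpy sketch of that idea, under the simplifying assumption that the in-context vector (ICV) is a mean difference of demonstration hidden states and that shifted states are renormalized (function names and the `alpha` scale here are illustrative, not the paper's exact recipe):

```python
import numpy as np

def in_context_vector(demo_input_states, demo_target_states):
    """ICV as the mean difference between target and input hidden
    states over the demonstration pairs (a simplified reading)."""
    return (demo_target_states - demo_input_states).mean(axis=0)

def shift_states(hidden_states, icv, alpha=0.1):
    """Steer a new query's latent states by adding the scaled ICV,
    then rescale each state back to its original norm."""
    shifted = hidden_states + alpha * icv
    norms = np.linalg.norm(hidden_states, axis=-1, keepdims=True)
    shifted *= norms / (np.linalg.norm(shifted, axis=-1, keepdims=True) + 1e-12)
    return shifted
```

In the actual method the shift would be applied to transformer hidden states at inference time; this sketch only shows the vector arithmetic.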
no code implementations • 14 Aug 2023 • Hui Kang, Sheng Liu, Huaxi Huang, Tongliang Liu
In real-world datasets, noisy labels are pervasive.
no code implementations • 11 Jul 2023 • Hui Kang, Sheng Liu, Huaxi Huang, Jun Yu, Bo Han, Dadong Wang, Tongliang Liu
In recent years, research on learning with noisy labels has focused on devising novel algorithms that can achieve robustness to noisy training labels while generalizing to clean data.
no code implementations • CVPR 2023 • Sheng Liu, Cong Phuoc Huynh, Cong Chen, Maxim Arap, Raffay Hamid
We present a simple yet effective self-supervised pre-training method for image harmonization which can leverage large-scale unannotated image datasets.
1 code implementation • 28 Dec 2022 • Zi'an Xu, Yin Dai, Fayu Liu, Weibing Chen, Yue Liu, Lifu Shi, Sheng Liu, YuHang Zhou
The development of deep learning models in medical image analysis is largely limited by the lack of large-sized, well-annotated datasets.
no code implementations • 23 Dec 2022 • Xiao Li, Sheng Liu, Jinxin Zhou, Xinyu Lu, Carlos Fernandez-Granda, Zhihui Zhu, Qing Qu
As model size continues to grow and access to labeled training data remains limited, transfer learning has become a popular approach in many scientific and engineering fields.
no code implementations • ICCV 2023 • Huaxi Huang, Hui Kang, Sheng Liu, Olivier Salvado, Thierry Rakotoarivelo, Dadong Wang, Tongliang Liu
The early stopping strategy averts updating CNNs during the early training phase and is widely employed in the presence of noisy labels.
1 code implementation • 2 Dec 2022 • Sheng Liu, Xu Zhang, Nitesh Sekhar, Yue Wu, Prateek Singhal, Carlos Fernandez-Granda
Empirical studies suggest that machine learning models trained with empirical risk minimization (ERM) often rely on attributes that may be spuriously correlated with the class labels.
1 code implementation • 2 Nov 2022 • Ruiyuan Lin, Sheng Liu, Jun Jiang, Shujun Li, Chengqing Li, C.-C. Jay Kuo
Recovering unknown, missing, damaged, distorted, or lost information in DCT coefficients is a common task in multiple applications of digital image processing, including image compression, selective image encryption, and image communication.
1 code implementation • CVPR 2023 • Kangning Liu, Weicheng Zhu, Yiqiu Shen, Sheng Liu, Narges Razavian, Krzysztof J. Geras, Carlos Fernandez-Granda
The framework employs a novel self-paced sampling strategy to ensure the accuracy of pseudo labels.
no code implementations • 4 Oct 2022 • Jinxin Zhou, Chong You, Xiao Li, Kangning Liu, Sheng Liu, Qing Qu, Zhihui Zhu
We extend such results and show through global solution and landscape analyses that a broad family of loss functions including commonly used label smoothing (LS) and focal loss (FL) exhibits Neural Collapse.
1 code implementation • CVPR 2022 • Xiao Lu, Yihong Cao, Sheng Liu, Chengjiang Long, Zipei Chen, Xuanyu Zhou, Yimin Yang, Chunxia Xiao
Our proposed approach is extensively validated on the ViSha dataset and a self-annotated dataset.
no code implementations • 14 Jun 2022 • Yuan Feng, Yaojun Hu, Pengfei Fang, Yanhong Yang, Sheng Liu, ShengYong Chen
However, jointly removing rain and haze from scene images is ill-posed and challenging, as the presence of haze and rain and changes in atmospheric light can both degrade the scene information.
no code implementations • 7 Jun 2022 • Zi'an Xu, Yin Dai, Fayu Liu, Siqi Li, Sheng Liu, Lifu Shi, Jun Fu
Preoperative tumor localization, differential diagnosis, and subsequent selection of appropriate treatment for parotid gland tumors are critical.
1 code implementation • CVPR 2022 • Sheng Liu, Xiaohan Nie, Raffay Hamid
We demonstrate that our approach: (a) significantly improves the quality of 3-D reconstruction for our small-parallax setting, (b) does not cause any degradation for data with large-parallax, and (c) maintains the generalizability and scalability of geometry-based sparse SfM.
1 code implementation • CVPR 2022 • Li Yi, Sheng Liu, Qi She, A. Ian McLeod, Boyu Wang
To address this issue, we focus on learning robust contrastive representations of the data, on which it is hard for the classifier to memorize the label noise under the CE loss.
1 code implementation • 28 Feb 2022 • Sheng Liu, Zhihui Zhu, Qing Qu, Chong You
In this work, we propose a principled approach for robust training of over-parameterized deep networks in classification tasks where a proportion of training labels are corrupted.
Ranked #1 on Learning with noisy labels on CIFAR-10N-Random3
1 code implementation • 28 Dec 2021 • Tiehang Duan, Zhenyi Wang, Sheng Liu, Sargur N. Srihari, Hui Yang
In this work, we propose an uncertainty estimation and reduction model (UNCER) to quantify and mitigate uncertainty during the EEG decoding process.
no code implementations • 21 Nov 2021 • Sheng Liu, Aakash Kaku, Weicheng Zhu, Matan Leibovich, Sreyas Mohan, Boyang Yu, Haoxiang Huang, Laure Zanna, Narges Razavian, Jonathan Niles-Weed, Carlos Fernandez-Granda
Reliable probability estimation is of crucial importance in many real-world applications where there is inherent (aleatoric) uncertainty.
2 code implementations • CVPR 2022 • Sheng Liu, Kangning Liu, Weicheng Zhu, Yiqiu Shen, Carlos Fernandez-Granda
We discover a phenomenon that has been previously reported in the context of classification: the networks tend to first fit the clean pixel-level labels during an "early-learning" phase, before eventually memorizing the false annotations.
1 code implementation • 15 Aug 2021 • Jiahao Wang, Yunhong Wang, Sheng Liu, Annan Li
Fine-grained action recognition is attracting increasing attention due to the emerging demand for specific action understanding in real-world applications, whereas data for rare fine-grained categories are very limited.
no code implementations • 8 Aug 2021 • Sheng Liu, Kevin Lin, Lijuan Wang, Junsong Yuan, Zicheng Liu
We introduce the task of open-vocabulary visual instance search (OVIS).
no code implementations • 23 Jun 2021 • Xiaozhen Xie, Sheng Liu
In this paper, we propose the multi-modal and frequency-weighted tensor nuclear norm (MFWTNN) and the non-convex MFWTNN for HSI denoising tasks.
1 code implementation • NeurIPS 2021 • Sheng Liu, Xiao Li, Yuexiang Zhai, Chong You, Zhihui Zhu, Carlos Fernandez-Granda, Qing Qu
Furthermore, we show that our ConvNorm can reduce the layerwise spectral norm of the weight matrices and hence improve the Lipschitzness of the network, leading to easier training and improved robustness for deep ConvNets.
no code implementations • 19 Jan 2021 • Sheng Liu, Xiaozhen Xie, Wenfeng Kong
In the Fourier transform domain of HSIs, different frequency slices (FS) contain different information, and the different singular values (SVs) within each FS likewise carry different information.
1 code implementation • 22 Dec 2020 • Liye Mei, Yalan Yu, Yueyun Weng, Xiaopeng Guo, Yan Liu, Du Wang, Sheng Liu, Fuling Zhou, Cheng Lei
Since manual analysis is highly time- and effort-consuming, computer-assisted automatic chromosome karyotype analysis based on images is routinely used to improve the efficiency and accuracy of the analysis.
no code implementations • 21 Aug 2020 • Sheng Liu, Zuo-Jun Max Shen, Xiang Ji
We formalize the bike lane planning problem in view of the cyclists' utility functions and derive an integer optimization model to maximize the utility.
2 code implementations • NeurIPS 2020 • Sheng Liu, Jonathan Niles-Weed, Narges Razavian, Carlos Fernandez-Granda
In contrast with existing approaches, which use the model output during early learning to detect the examples with clean labels, and either ignore or attempt to correct the false labels, we take a different route and instead capitalize on early learning via regularization.
Ranked #4 on Learning with noisy labels on CIFAR-10N-Random2
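The entry above capitalizes on early learning via regularization rather than detecting or correcting noisy labels. A hedged numpy sketch of one way such a regularizer can look, assuming (as a simplification, not the paper's exact formulation) a momentum-averaged target distribution per example and a log-penalty that discourages drifting away from early predictions:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def early_learning_reg_loss(logits, labels, targets, lam=3.0, beta=0.7):
    """Cross-entropy plus an early-learning regularizer.

    `targets` is a running (momentum) average of past model
    probabilities for each example; the log-term penalizes moving
    away from these early-learning predictions, which tend to be
    correct before the network memorizes the noisy labels.
    Returns the loss and the updated targets.
    """
    p = softmax(logits)
    n = len(labels)
    targets = beta * targets + (1 - beta) * p  # momentum update
    ce = -np.log(p[np.arange(n), labels] + 1e-12).mean()
    reg = np.log(1.0 - (targets * p).sum(axis=1) + 1e-12).mean()
    return ce + lam * reg, targets
```

The regularizer grows as predictions align with the running targets, pulling gradients toward the early-learning solution; the exact loss and hyperparameters in the published method may differ.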
no code implementations • 14 May 2020 • Tianhang Zheng, Sheng Liu, Changyou Chen, Junsong Yuan, Baochun Li, Kui Ren
We first formulate generation of adversarial skeleton actions as a constrained optimization problem by representing or approximating the physiological and physical constraints with mathematical formulations.
1 code implementation • 9 Nov 2019 • Sheng Liu, Chhavi Yadav, Carlos Fernandez-Granda, Narges Razavian
Early detection is a crucial goal in the study of Alzheimer's Disease (AD).
1 code implementation • 15 Sep 2019 • Ye Yuan, Junlin Li, Liang Li, Frank Jiang, Xiuchuan Tang, Fumin Zhang, Sheng Liu, Jorge Goncalves, Henning U. Voss, Xiuting Li, Jürgen Kurths, Han Ding
The study presents a general framework for discovering underlying Partial Differential Equations (PDEs) using measured spatiotemporal data.
no code implementations • 12 May 2019 • Brett Bernstein, Sheng Liu, Chrysa Papadaniil, Carlos Fernandez-Granda
In this work, we consider separable inverse problems, where the data are modeled as a linear combination of functions that depend nonlinearly on certain parameters of interest.
no code implementations • 9 Apr 2019 • Sheng Liu, Mark Cheng, Hayley Brooks, Wayne Mackey, David J. Heeger, Esteban G. Tabak, Carlos Fernandez-Granda
We apply our methodology to detect anomalous individuals, to cluster the cohort into groups with different sleeping tendencies, and to obtain improved predictions of future sleep behavior.
1 code implementation • 6 Sep 2018 • Shiqi Liu, Jingxin Liu, Qian Zhao, Xiangyong Cao, Huibin Li, Hongying Meng, Sheng Liu, Deyu Meng
In the field of machine learning, it remains a critical issue to identify and supervise learned representations, without manual intervention or reliance on intuition, in order to extract useful knowledge or serve downstream tasks.
no code implementations • 5 Mar 2018 • Zhiyang Liu, Chen Cao, Shuxue Ding, Tong Han, Hong Wu, Sheng Liu
Patients with ischemic stroke benefit most from the earliest possible definitive diagnosis.
no code implementations • 18 Feb 2017 • Chunlei Li, Guangshuai Gao, Zhoufeng Liu, Di Huang, Sheng Liu, Miao Yu
In order to accurately detect defects in patterned fabric images, a novel detection algorithm based on Gabor-HOG (GHOG) and low-rank decomposition is proposed in this paper.