no code implementations • 13 Feb 2024 • Xiangyu Chang, Sk Miraj Ahmed, Srikanth V. Krishnamurthy, Basak Guler, Ananthram Swami, Samet Oymak, Amit K. Roy-Chowdhury
The key premise of federated learning (FL) is to train ML models across a diverse set of data-owners (clients), without exchanging local data.
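This premise lends itself to a compact illustration. Below is a minimal FedAvg-style sketch on synthetic least-squares clients; the names and the plain-averaging aggregation rule are illustrative, not this paper's method:

```python
import numpy as np

def local_update(global_w, X, y, lr=0.1, steps=10):
    """One client's local least-squares gradient steps; raw data never leaves the client."""
    w = global_w.copy()
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

rng = np.random.default_rng(0)
clients = [(rng.normal(size=(50, 5)), rng.normal(size=50)) for _ in range(4)]
w = np.zeros(5)
for _ in range(20):
    # The server aggregates only model updates, never the clients' local data.
    w = np.mean([local_update(w, X, y) for X, y in clients], axis=0)
```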
no code implementations • 6 Jan 2024 • Xiangyu Chang, Sk Miraj Ahmed, Srikanth V. Krishnamurthy, Basak Guler, Ananthram Swami, Samet Oymak, Amit K. Roy-Chowdhury
Parameter-efficient tuning (PET) methods such as LoRA, Adapter, and Visual Prompt Tuning (VPT) have found success in enabling adaptation to new domains by tuning small modules within a transformer model.
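A minimal sketch of the LoRA idea referenced here, assuming a frozen linear layer augmented with a trainable low-rank update (the rank, scale, and initialization are illustrative):

```python
import numpy as np

class LoRALinear:
    """Frozen weight W plus a trainable low-rank update B @ A (rank r << d)."""
    def __init__(self, W, r=4, alpha=1.0):
        d_out, d_in = W.shape
        self.W = W                                               # frozen pretrained weight
        self.A = np.random.normal(scale=0.01, size=(r, d_in))    # trainable
        self.B = np.zeros((d_out, r))                            # zero-init, so training starts at W
        self.alpha = alpha

    def forward(self, x):
        # Only A and B would receive gradients; W stays fixed.
        return self.W @ x + self.alpha * (self.B @ (self.A @ x))

layer = LoRALinear(np.random.normal(size=(8, 8)))
y = layer.forward(np.ones(8))
```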
no code implementations • 22 Oct 2023 • Hao Di, Yi Yang, Haishan Ye, Xiangyu Chang
Personalization aims to characterize individual preferences and is widely applied across many fields.
no code implementations • 10 Oct 2023 • Ying Wu, Hanzhong Liu, Kai Ren, Xiangyu Chang
In the rule discovery phase, we utilize a causal forest to generate a pool of causal rules with corresponding subgroup average treatment effects.
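The causal forest itself is beyond a short example, but the subgroup bookkeeping is simple. A minimal sketch of the subgroup average treatment effect for a single candidate rule, assuming randomized treatment and synthetic data:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.uniform(size=(1000, 2))          # covariates
T = rng.integers(0, 2, size=1000)        # randomized binary treatment
Y = X[:, 0] + T * (X[:, 1] > 0.5) + rng.normal(scale=0.1, size=1000)

def subgroup_ate(rule_mask, T, Y):
    """Difference-in-means ATE within the subgroup a rule selects (valid under randomization)."""
    t, y = T[rule_mask], Y[rule_mask]
    return y[t == 1].mean() - y[t == 0].mean()

rule = X[:, 1] > 0.5                     # a candidate causal rule, e.g. proposed by a causal forest
print(subgroup_ate(rule, T, Y))          # close to 1.0 for this rule
```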
no code implementations • 10 Jul 2023 • Xuechen Zhang, Mingchen Li, Xiangyu Chang, Jiasi Chen, Amit K. Roy-Chowdhury, Ananda Theertha Suresh, Samet Oymak
These insights on scale and modularity motivate a new federated learning approach we call "You Only Load Once" (FedYolo): clients load a full PTF model once, and all future updates are accomplished through communication-efficient modules with limited catastrophic forgetting, where each task is assigned to its own module.
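A minimal sketch of the load-once, module-per-task bookkeeping this describes, with a toy linear "backbone" and additive adapters standing in for a real PTF (all names here are illustrative):

```python
import numpy as np

class ModularClient:
    """One frozen backbone loaded a single time; each task gets its own small adapter."""
    def __init__(self, W_frozen):
        self.W = W_frozen            # pretrained weights, downloaded once
        self.modules = {}            # task_id -> lightweight per-task adapter

    def update_task(self, task_id, adapter):
        # Only this small adapter crosses the network; other tasks' modules are
        # untouched, which limits interference (catastrophic forgetting).
        self.modules[task_id] = adapter

    def predict(self, task_id, x):
        return self.W @ x + self.modules[task_id]   # backbone + task-specific shift

client = ModularClient(np.eye(3))
client.update_task("taskA", np.array([1.0, 0.0, 0.0]))
client.update_task("taskB", np.array([0.0, 2.0, 0.0]))
print(client.predict("taskA", np.ones(3)))
```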
no code implementations • 27 Jun 2023 • Xiao Guo, Xiang Li, Xiangyu Chang, Shujie Ma
To remove the bias incurred by RR and the squared network matrices, we develop a two-step bias-adjustment procedure.
1 code implementation • 18 Jun 2023 • Zhihong Liu, Hoang Anh Just, Xiangyu Chang, Xi Chen, Ruoxi Jia
Data valuation -- quantifying the contribution of individual data sources to certain predictive behaviors of a model -- is of great importance to enhancing the transparency of machine learning and designing incentive systems for data sharing.
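A common estimator in this literature is the Shapley value, approximated by Monte Carlo over permutations. A minimal sketch with a toy 1-NN utility; this illustrates the general notion of data valuation, not this paper's specific method:

```python
import numpy as np

def utility(subset_idx, X, y, X_val, y_val):
    """Toy utility: validation accuracy of a 1-NN classifier trained on the subset."""
    if len(subset_idx) == 0:
        return 0.0
    Xs, ys = X[subset_idx], y[subset_idx]
    d = ((X_val[:, None, :] - Xs[None, :, :]) ** 2).sum(-1)
    return (ys[d.argmin(1)] == y_val).mean()

def shapley_mc(X, y, X_val, y_val, n_perm=200, seed=0):
    """Monte Carlo Shapley: average each point's marginal contribution over random permutations."""
    rng = np.random.default_rng(seed)
    n = len(y)
    phi = np.zeros(n)
    for _ in range(n_perm):
        perm = rng.permutation(n)
        prev = 0.0
        for k in range(n):
            cur = utility(perm[:k + 1], X, y, X_val, y_val)
            phi[perm[k]] += cur - prev
            prev = cur
    return phi / n_perm

rng = np.random.default_rng(0)
X, y = rng.normal(size=(30, 2)), rng.integers(0, 2, size=30)
X_val, y_val = rng.normal(size=(20, 2)), rng.integers(0, 2, size=20)
print(shapley_mc(X, y, X_val, y_val, n_perm=50).round(3))
```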
no code implementations • 6 Jan 2023 • Shuai Liu, Xiao Guo, Shun Qi, Huaning Wang, Xiangyu Chang
In particular, we derive a closed-form expression for the local update step and use the iterative proximal projection method to deal with the group fused lasso penalty in the global update step.
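The full group fused lasso prox has no simple closed form, but the block soft-thresholding operator that iterative proximal methods build on does. A minimal sketch of that building block (the paper's actual solver is not reproduced here):

```python
import numpy as np

def prox_group(v, lam):
    """Block soft-thresholding: the closed-form prox of lam * ||v||_2,
    the basic building block inside proximal methods for group penalties."""
    norm = np.linalg.norm(v)
    return np.zeros_like(v) if norm <= lam else (1.0 - lam / norm) * v

# Shrinking the successive differences of a piecewise-constant signal:
D = np.array([[1.0, 1.0], [1.1, 0.9], [3.0, 3.2]])
diffs = np.diff(D, axis=0)
print([prox_group(d, lam=0.5) for d in diffs])  # the small jump is zeroed out
```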
no code implementations • 30 Oct 2022 • Mengmeng Wu, Ruoxi Jia, Changle lin, Wei Huang, Xiangyu Chang
Data valuation, especially quantifying data value in algorithmic prediction and decision-making, is a fundamental problem in data trading scenarios.
no code implementations • 11 May 2022 • Shuai Liu, Yixuan Qiu, Baojuan Li, Huaning Wang, Xiangyu Chang
We consider the problem of identifying alterations of brain functional connectivity for a single MDD patient.
no code implementations • 21 Sep 2021 • Yi Yang, Ying Wu, Mei Li, Xiangyu Chang, Yong Tan
Then, we transform the social welfare maximization problem into a risk minimization task in machine learning and derive a fairness-aware scoring system with the help of mixed integer programming.
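A minimal sketch of deriving a scoring system via mixed integer programming, assuming the PuLP solver and a standard big-M formulation that minimizes margin violations with integer point values; fairness criteria would enter as additional linear constraints. This is illustrative, not the paper's exact formulation:

```python
import numpy as np
from pulp import LpProblem, LpMinimize, LpVariable, lpSum

rng = np.random.default_rng(2)
X = rng.integers(0, 2, size=(40, 3))            # binary features
y = np.where(X[:, 0] + X[:, 1] >= 1, 1, -1)     # labels in {-1, +1}

prob = LpProblem("scoring_system", LpMinimize)
s = [LpVariable(f"s{j}", -5, 5, cat="Integer") for j in range(3)]   # integer point values
b = LpVariable("b", -5, 5, cat="Integer")                           # intercept
e = [LpVariable(f"e{i}", cat="Binary") for i in range(len(y))]      # error indicators
M = 40                                                              # big-M constant
for i in range(len(y)):
    # If example i is classified correctly with margin >= 1, e[i] can stay 0.
    prob += int(y[i]) * (lpSum(int(X[i, j]) * s[j] for j in range(3)) + b) >= 1 - M * e[i]
prob += lpSum(e)                                 # objective: number of margin violations
prob.solve()
print([v.value() for v in s], b.value())
```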
no code implementations • 3 Sep 2021 • Xiang Li, Jiadong Liang, Xiangyu Chang, Zhihua Zhang
Both methods are communication-efficient and applicable to online data.
no code implementations • 1 Mar 2021 • Xiao Guo, Xiang Li, Xiangyu Chang, Shusen Wang, Zhihua Zhang
Limited communication capacity and the risk of privacy breaches make computing the eigenspace challenging.
no code implementations • 16 Dec 2020 • Xiangyu Chang, Yingcong Li, Samet Oymak, Christos Thrampoulidis
Deep networks are typically trained with many more parameters than the size of the training dataset.
no code implementations • 3 Sep 2020 • Shao-Bo Lin, Xiangyu Chang, Xingping Sun
Data sites selected from modeling high-dimensional problems often appear scattered in non-paternalistic ways.
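A minimal sketch of kernel interpolation on scattered high-dimensional sites, using a Gaussian kernel and a tiny ridge term for numerical conditioning (the paper's analysis concerns when and how well such interpolation works; the setup below is illustrative):

```python
import numpy as np

def gaussian_kernel(A, B, gamma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

rng = np.random.default_rng(3)
X = rng.uniform(size=(30, 5))                        # scattered high-dimensional data sites
f = np.sin(X.sum(axis=1))                            # target values observed at the sites
K = gaussian_kernel(X, X)
alpha = np.linalg.solve(K + 1e-10 * np.eye(30), f)   # tiny ridge for conditioning

X_new = rng.uniform(size=(5, 5))
f_hat = gaussian_kernel(X_new, X) @ alpha            # interpolant evaluated off-site
```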
no code implementations • 25 Apr 2020 • Xiao Guo, Yixuan Qiu, Hai Zhang, Xiangyu Chang
Directed networks are broadly used to represent asymmetric relationships among units.
no code implementations • 16 Mar 2020 • Yining Wang, Xi Chen, Xiangyu Chang, Dongdong Ge
In this paper, using demand function prediction in dynamic pricing as the motivating example, we study the problem of constructing accurate confidence intervals for the demand function.
no code implementations • 8 Mar 2020 • Yi Yang, Yuxuan Guo, Xiangyu Chang
To show the usefulness of the framework, two cost-sensitive multicategory boosting algorithms are derived as concrete instances.
no code implementations • 20 Jan 2020 • Hai Zhang, Xiao Guo, Xiangyu Chang
In this paper, we study the spectral clustering using randomized sketching algorithms from a statistical perspective, where we typically assume the network data are generated from a stochastic block model that is not necessarily of full rank.
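A minimal sketch of spectral clustering with a randomized range finder standing in for the full eigendecomposition, tested on a two-block stochastic block model; the specific sketching scheme here is illustrative, not necessarily the one analyzed in the paper:

```python
import numpy as np
from sklearn.cluster import KMeans

def sketched_spectral_clustering(A, k, sketch_dim=20, seed=0):
    """Spectral clustering of a symmetric adjacency matrix A using a
    randomized range finder instead of a full eigendecomposition."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    Omega = rng.normal(size=(n, sketch_dim))            # random test matrix
    Q, _ = np.linalg.qr(A @ Omega)                      # approximate range of A
    B = Q.T @ A @ Q                                     # small projected matrix
    evals, V = np.linalg.eigh(B)
    U = (Q @ V)[:, np.argsort(-np.abs(evals))[:k]]      # approximate leading eigenvectors
    return KMeans(n_clusters=k, n_init=10, random_state=seed).fit_predict(U)

# Two-block stochastic block model adjacency (not necessarily full rank):
rng = np.random.default_rng(4)
z = np.repeat([0, 1], 50)
P = np.where(z[:, None] == z[None, :], 0.5, 0.05)
A = (rng.uniform(size=(100, 100)) < P).astype(float)
A = np.triu(A, 1); A = A + A.T
print(sketched_spectral_clustering(A, k=2))
```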
no code implementations • 9 Jan 2020 • Xiangyu Chang, Shao-Bo Lin
In this paper, we propose an adaptive stopping rule for kernel-based gradient descent (KGD) algorithms.
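A minimal sketch of kernel-based gradient descent with a generic stopping check on the training residual; the paper's adaptive rule is more principled than the simple tolerance test used here:

```python
import numpy as np

def kernel_gradient_descent(K, y, lr=None, tol=1e-4, max_iter=1000):
    """Gradient descent on the kernel least-squares objective 0.5 * ||K a - y||^2 / n,
    stopped when the loss stops improving (a generic rule, not the paper's)."""
    n = len(y)
    lr = lr or 1.0 / np.linalg.norm(K, 2)   # step size from the top eigenvalue of K
    a = np.zeros(n)
    prev = np.inf
    for t in range(max_iter):
        r = K @ a - y
        loss = 0.5 * (r @ r) / n
        if prev - loss < tol:               # adaptive-stopping check
            break
        prev = loss
        a -= lr * (K @ r) / n               # gradient step (K is symmetric)
    return a, t

rng = np.random.default_rng(5)
X = rng.uniform(size=(50, 1))
K = np.exp(-((X - X.T) ** 2))               # Gaussian kernel matrix
a, stopped_at = kernel_gradient_descent(K, np.sin(3 * X[:, 0]))
print(stopped_at)
```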
no code implementations • 29 Nov 2017 • Aven Samareh, Yan Jin, Zhangyang Wang, Xiangyu Chang, Shuai Huang
We present our preliminary work to determine whether a patient's vocal acoustic, linguistic, and facial patterns could predict clinical ratings of depression severity, namely the Patient Health Questionnaire depression scale (PHQ-8).
no code implementations • 28 Feb 2017 • Shao-Bo Lin, Jinshan Zeng, Xiangyu Chang
This paper aims at a refined error analysis for binary classification using the support vector machine (SVM) with a Gaussian kernel and a convex loss.
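For context, a minimal runnable example of the object under analysis, an SVM with a Gaussian (RBF) kernel and the convex hinge loss, using scikit-learn on synthetic data:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# SVC with an RBF (Gaussian) kernel; gamma controls the kernel width,
# a quantity this kind of error analysis is concerned with.
clf = SVC(kernel="rbf", gamma=0.1, C=1.0).fit(X_tr, y_tr)
print(clf.score(X_te, y_te))
```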
no code implementations • 23 Jan 2016 • Xiangyu Chang, Shao-Bo Lin, Yao Wang
After theoretically analyzing the pros and cons, we find that although divide-and-conquer local average regression can reach the optimal learning rate, the restriction on the number of data blocks is rather strong, making it feasible only for a small number of data blocks.
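A minimal sketch of divide-and-conquer local average regression (a Nadaraya-Watson estimator on each block, then averaged across blocks), which makes the block-number trade-off concrete; the bandwidth and block scheme are illustrative:

```python
import numpy as np

def dc_local_average(X, y, X_query, m_blocks, h=0.1, seed=0):
    """Divide-and-conquer Nadaraya-Watson: fit a local average on each data block,
    then average the block estimators at the query points."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(y))
    preds = []
    for block in np.array_split(idx, m_blocks):
        d2 = ((X_query[:, None, :] - X[block][None, :, :]) ** 2).sum(-1)
        w = np.exp(-d2 / (2 * h ** 2))
        preds.append((w @ y[block]) / w.sum(axis=1))
    # Too many blocks starve each local average of neighbors -- the restriction above.
    return np.mean(preds, axis=0)

rng = np.random.default_rng(6)
X = rng.uniform(size=(600, 1))
y = np.sin(4 * X[:, 0]) + rng.normal(scale=0.1, size=600)
Xq = np.linspace(0, 1, 5)[:, None]
print(dc_local_average(X, y, Xq, m_blocks=6))
```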
no code implementations • 31 Mar 2014 • Xiangyu Chang, Yu Wang, Rongjian Li, Zongben Xu
Nevertheless, this framework has two serious drawbacks: first, its solution unavoidably includes a considerable portion of redundant noise features in many situations; second, the framework neither offers an intuitive explanation of why it can select relevant features nor provides any theoretical guarantee of feature-selection consistency.