Search Results for author: Xiangyu Chang

Found 24 papers, 1 paper with code

FLASH: Federated Learning Across Simultaneous Heterogeneities

no code implementations • 13 Feb 2024 • Xiangyu Chang, Sk Miraj Ahmed, Srikanth V. Krishnamurthy, Basak Guler, Ananthram Swami, Samet Oymak, Amit K. Roy-Chowdhury

The key premise of federated learning (FL) is to train ML models across a diverse set of data-owners (clients), without exchanging local data.

Federated Learning, Multi-Armed Bandits

Plug-and-Play Transformer Modules for Test-Time Adaptation

no code implementations • 6 Jan 2024 • Xiangyu Chang, Sk Miraj Ahmed, Srikanth V. Krishnamurthy, Basak Guler, Ananthram Swami, Samet Oymak, Amit K. Roy-Chowdhury

Parameter-efficient tuning (PET) methods such as LoRA, Adapter, and Visual Prompt Tuning (VPT) have found success in enabling adaptation to new domains by tuning small modules within a transformer model.

Test-time Adaptation, Visual Prompt Tuning
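
As background for the kind of module these PET methods tune, here is a minimal LoRA-style sketch in PyTorch: a frozen pretrained linear layer wrapped with a trainable low-rank update. The rank, scaling, and layer size are illustrative choices, not the configuration studied in the paper.

```python
# Hypothetical LoRA-style adapter: a frozen base linear layer plus a trainable
# low-rank update. Only the A/B matrices are tuned during adaptation.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():          # freeze the pretrained weights
            p.requires_grad = False
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank

    def forward(self, x):
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

layer = LoRALinear(nn.Linear(768, 768), rank=8)
trainable = [n for n, p in layer.named_parameters() if p.requires_grad]
print(trainable)  # only ['A', 'B'] receive gradient updates
```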

PPFL: A Personalized Federated Learning Framework for Heterogeneous Population

no code implementations • 22 Oct 2023 • Hao Di, Yi Yang, Haishan Ye, Xiangyu Chang

Personalization aims to characterize individual preferences and is widely applied across many fields.

Personalized Federated Learning

Causal Rule Learning: Enhancing the Understanding of Heterogeneous Treatment Effect via Weighted Causal Rules

no code implementations • 10 Oct 2023 • Ying Wu, Hanzhong Liu, Kai Ren, Xiangyu Chang

In the rule discovery phase, we utilize a causal forest to generate a pool of causal rules with corresponding subgroup average treatment effects.

Descriptive
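
The excerpt above describes generating candidate causal rules together with subgroup average treatment effects. The sketch below is a simplified stand-in, not the paper's causal-forest procedure: it estimates individual effects with a T-learner on synthetic data, fits a shallow tree to those estimates so each leaf acts as a candidate rule, and reports the subgroup average effect per leaf.

```python
# Illustrative stand-in for the rule-discovery step (not the paper's causal
# forest): T-learner CATE estimates, then a shallow tree whose leaves serve as
# candidate rules with subgroup average treatment effects.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.tree import DecisionTreeRegressor, export_text

rng = np.random.default_rng(0)
n, d = 2000, 5
X = rng.normal(size=(n, d))
T = rng.integers(0, 2, size=n)                      # binary treatment
tau = 2.0 * (X[:, 0] > 0)                           # true effect depends on x0
y = X[:, 1] + tau * T + rng.normal(scale=0.5, size=n)

m1 = RandomForestRegressor(n_estimators=200, random_state=0).fit(X[T == 1], y[T == 1])
m0 = RandomForestRegressor(n_estimators=200, random_state=0).fit(X[T == 0], y[T == 0])
cate = m1.predict(X) - m0.predict(X)                # estimated individual effects

rule_tree = DecisionTreeRegressor(max_depth=2, min_samples_leaf=100).fit(X, cate)
print(export_text(rule_tree, feature_names=[f"x{j}" for j in range(d)]))

leaves = rule_tree.apply(X)                         # subgroup ATE per extracted rule
for leaf_id in np.unique(leaves):
    print(leaf_id, cate[leaves == leaf_id].mean())
```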

FedYolo: Augmenting Federated Learning with Pretrained Transformers

no code implementations • 10 Jul 2023 • Xuechen Zhang, Mingchen Li, Xiangyu Chang, Jiasi Chen, Amit K. Roy-Chowdhury, Ananda Theertha Suresh, Samet Oymak

These insights on scale and modularity motivate a new federated learning approach we call "You Only Load Once" (FedYolo): the clients load a full PTF model once, and all future updates are accomplished through communication-efficient modules with limited catastrophic forgetting, where each task is assigned to its own module.

Federated Learning
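
A rough sketch of the communication pattern described above, under assumed interfaces (the Client class, module shapes, and plain FedAvg aggregation are illustrative, not the authors' implementation): the pretrained backbone is downloaded once and frozen, and only small per-task modules are trained and exchanged.

```python
# Sketch of per-task modular federated updates: the backbone stays local and
# frozen after one download; only small task modules travel to the server.
import copy
import torch
import torch.nn as nn

class Client:
    def __init__(self, backbone: nn.Module):
        self.backbone = backbone                      # loaded once, kept frozen
        for p in self.backbone.parameters():
            p.requires_grad = False
        self.modules_by_task = {}                     # task id -> small module

    def local_update(self, task_id: str, global_module: nn.Module, data):
        module = copy.deepcopy(global_module)         # start from the server state
        opt = torch.optim.SGD(module.parameters(), lr=0.01)
        for x, y in data:                             # a few local steps
            feats = self.backbone(x).detach()
            loss = nn.functional.cross_entropy(module(feats), y)
            opt.zero_grad()
            loss.backward()
            opt.step()
        self.modules_by_task[task_id] = module
        return module.state_dict()                    # only module weights are sent

def server_aggregate(updates):
    # plain FedAvg over the (small) module parameters
    return {k: torch.stack([u[k] for u in updates]).mean(0) for k in updates[0]}
```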

Privacy-Preserving Community Detection for Locally Distributed Multiple Networks

no code implementations • 27 Jun 2023 • Xiao Guo, Xiang Li, Xiangyu Chang, Shujie Ma

To remove the bias incurred by RR and the squared network matrices, we develop a two-step bias-adjustment procedure.

Clustering, Community Detection, +2

2D-Shapley: A Framework for Fragmented Data Valuation

1 code implementation • 18 Jun 2023 • Zhihong Liu, Hoang Anh Just, Xiangyu Chang, Xi Chen, Ruoxi Jia

Data valuation -- quantifying the contribution of individual data sources to certain predictive behaviors of a model -- is of great importance to enhancing the transparency of machine learning and designing incentive systems for data sharing.

Counterfactual, Data Valuation, +1
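
For context on data valuation, the following is a standard permutation-sampling estimator of per-point data Shapley values with a hold-out accuracy utility; it is background for this line of work, not the 2D-Shapley method for fragmented data introduced in the paper.

```python
# Monte Carlo (permutation-sampling) data Shapley values; the utility is the
# hold-out accuracy of a logistic regression retrained on each prefix.
import numpy as np
from sklearn.linear_model import LogisticRegression

def utility(idx, X_tr, y_tr, X_val, y_val):
    if len(np.unique(y_tr[idx])) < 2:                 # cannot fit a classifier yet
        return 0.0
    clf = LogisticRegression(max_iter=200).fit(X_tr[idx], y_tr[idx])
    return clf.score(X_val, y_val)

def data_shapley(X_tr, y_tr, X_val, y_val, n_perms=50, seed=0):
    rng = np.random.default_rng(seed)
    n = len(y_tr)
    values = np.zeros(n)
    for _ in range(n_perms):
        perm = rng.permutation(n)
        prev = 0.0
        for k in range(1, n + 1):
            cur = utility(perm[:k], X_tr, y_tr, X_val, y_val)
            values[perm[k - 1]] += cur - prev          # marginal contribution
            prev = cur
    return values / n_perms
```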

Learning Personalized Brain Functional Connectivity of MDD Patients from Multiple Sites via Federated Bayesian Networks

no code implementations • 6 Jan 2023 • Shuai Liu, Xiao Guo, Shun Qi, Huaning Wang, Xiangyu Chang

In particular, we derive a closed-form expression for the local update step and use the iterative proximal projection method to deal with the group fused lasso penalty in the global update step.

Federated Learning

Variance reduced Shapley value estimation for trustworthy data valuation

no code implementations • 30 Oct 2022 • Mengmeng Wu, Ruoxi Jia, Changle Lin, Wei Huang, Xiangyu Chang

Data valuation, especially quantifying data value in algorithmic prediction and decision-making, is a fundamental problem in data trading scenarios.

Data Valuation, Decision Making

Learning Multitask Gaussian Bayesian Networks

no code implementations • 11 May 2022 • Shuai Liu, Yixuan Qiu, Baojuan Li, Huaning Wang, Xiangyu Chang

We consider the problem of identifying alterations of brain functional connectivity for a single MDD patient.

Toward a Fairness-Aware Scoring System for Algorithmic Decision-Making

no code implementations • 21 Sep 2021 • Yi Yang, Ying Wu, Mei Li, Xiangyu Chang, Yong Tan

Then, we transform the social welfare maximization problem into the risk minimization task in machine learning, and derive a fairness-aware scoring system with the help of mixed integer programming.

Decision Making, Fairness, +1

FedPower: Privacy-Preserving Distributed Eigenspace Estimation

no code implementations • 1 Mar 2021 • Xiao Guo, Xiang Li, Xiangyu Chang, Shusen Wang, Zhihua Zhang

The low communication power and the possible privacy breaches of data make the computation of the eigenspace challenging.

BIG-bench Machine Learning, Dimensionality Reduction, +2

Kernel Interpolation of High Dimensional Scattered Data

no code implementations • 3 Sep 2020 • Shao-Bo Lin, Xiangyu Chang, Xingping Sun

Data sites selected from modeling high-dimensional problems often appear scattered in non-paternalistic ways.

Clustering, Vocal Bursts Intensity Prediction

Randomized spectral co-clustering for large-scale directed networks

no code implementations • 25 Apr 2020 • Xiao Guo, Yixuan Qiu, Hai Zhang, Xiangyu Chang

Directed networks are broadly used to represent asymmetric relationships among units.

Clustering

Uncertainty Quantification for Demand Prediction in Contextual Dynamic Pricing

no code implementations • 16 Mar 2020 • Yining Wang, Xi Chen, Xiangyu Chang, Dongdong Ge

In this paper, using the problem of demand function prediction in dynamic pricing as the motivating example, we study the problem of constructing accurate confidence intervals for the demand function.

Management, Uncertainty Quantification, +1
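
To illustrate the object being estimated, here is a generic bootstrap confidence interval for a linear-in-price demand model on synthetic data; the paper's interval construction and its contextual dynamic-pricing setting are not reproduced.

```python
# Generic bootstrap CI for expected demand at a query price under a simple
# linear demand model (illustration only, not the paper's method).
import numpy as np

rng = np.random.default_rng(0)
prices = rng.uniform(1.0, 10.0, size=500)
demand = 50.0 - 3.0 * prices + rng.normal(scale=4.0, size=500)   # synthetic data

def fit_demand(p, d):
    A = np.column_stack([np.ones_like(p), p])
    coef, *_ = np.linalg.lstsq(A, d, rcond=None)
    return coef                                                   # (intercept, slope)

p0 = 5.0                                                          # query price
boot = []
for _ in range(2000):
    idx = rng.integers(0, len(prices), size=len(prices))
    b0, b1 = fit_demand(prices[idx], demand[idx])
    boot.append(b0 + b1 * p0)
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"95% CI for expected demand at price {p0}: [{lo:.2f}, {hi:.2f}]")
```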

Angle-Based Cost-Sensitive Multicategory Classification

no code implementations • 8 Mar 2020 • Yi Yang, Yuxuan Guo, Xiangyu Chang

To show the usefulness of the framework, two cost-sensitive multicategory boosting algorithms are derived as concrete instances.

Classification, General Classification

Randomized Spectral Clustering in Large-Scale Stochastic Block Models

no code implementations • 20 Jan 2020 • Hai Zhang, Xiao Guo, Xiangyu Chang

In this paper, we study spectral clustering based on randomized sketching algorithms from a statistical perspective, where we typically assume the network data are generated from a stochastic block model that is not necessarily of full rank.

Clustering, Community Detection, +1
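
A minimal sketch of spectral clustering with randomized sketching, assuming a randomized SVD of the adjacency matrix followed by k-means on the leading singular vectors; the sketch dimension, normalization, and the stochastic block model analysis in the paper are simplified away.

```python
# Randomized-sketch spectral clustering: randomized SVD of the adjacency
# matrix, then k-means on the leading left singular vectors.
import numpy as np
from sklearn.utils.extmath import randomized_svd
from sklearn.cluster import KMeans

def randomized_spectral_clustering(A, k, seed=0):
    U, s, Vt = randomized_svd(A, n_components=k, n_oversamples=10, random_state=seed)
    return KMeans(n_clusters=k, n_init=10, random_state=seed).fit_predict(U)

# tiny two-block stochastic block model example
rng = np.random.default_rng(0)
n, k = 200, 2
labels = np.repeat([0, 1], n // 2)
P = np.where(labels[:, None] == labels[None, :], 0.20, 0.02)
A = rng.binomial(1, P).astype(float)
A = np.triu(A, 1)
A = A + A.T                                          # symmetric, no self-loops
print(randomized_spectral_clustering(A, k)[:10])
```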

Adaptive Stopping Rule for Kernel-based Gradient Descent Algorithms

no code implementations • 9 Jan 2020 • Xiangyu Chang, Shao-Bo Lin

In this paper, we propose an adaptive stopping rule for kernel-based gradient descent (KGD) algorithms.

Learning Theory
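
For orientation, here is a bare-bones kernel gradient descent loop with a placeholder hold-out stopping check; the adaptive rule proposed in the paper (which avoids such a validation split) is not reproduced here.

```python
# Kernel gradient descent on the squared loss with a crude placeholder
# early-stopping check; the paper's adaptive stopping rule is not implemented.
import numpy as np

def gaussian_kernel(X, Z, bandwidth=1.0):
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * bandwidth ** 2))

def kernel_gradient_descent(X, y, X_val, y_val, step=0.5, max_iter=500):
    K = gaussian_kernel(X, X)
    K_val = gaussian_kernel(X_val, X)
    alpha = np.zeros(len(y))
    best_err, best_alpha = np.inf, alpha.copy()
    for t in range(max_iter):
        alpha = alpha - (step / len(y)) * (K @ alpha - y)   # functional gradient step
        err = np.mean((K_val @ alpha - y_val) ** 2)
        if err < best_err:
            best_err, best_alpha = err, alpha.copy()
        elif t > 20 and err > 1.1 * best_err:               # placeholder stopping check
            break
    return best_alpha
```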

Predicting Depression Severity by Multi-Modal Feature Engineering and Fusion

no code implementations • 29 Nov 2017 • Aven Samareh, Yan Jin, Zhangyang Wang, Xiangyu Chang, Shuai Huang

We present our preliminary work to determine whether a patient's vocal acoustic, linguistic, and facial patterns could predict clinical ratings of depression severity, namely the Patient Health Questionnaire depression scale (PHQ-8).

Feature Engineering

Learning rates for classification with Gaussian kernels

no code implementations • 28 Feb 2017 • Shao-Bo Lin, Jinshan Zeng, Xiangyu Chang

This paper aims at refined error analysis for binary classification using support vector machine (SVM) with Gaussian kernel and convex loss.

Binary Classification, Classification, +2

Divide and Conquer Local Average Regression

no code implementations • 23 Jan 2016 • Xiangyu Chang, Shao-Bo Lin, Yao Wang

After theoretically analyzing the pros and cons, we find that although divide and conquer local average regression can reach the optimal learning rate, the restriction on the number of data blocks is rather strong, which makes it feasible only for a small number of data blocks.

Regression

Sparse K-Means with $\ell_{\infty}/\ell_0$ Penalty for High-Dimensional Data Clustering

no code implementations • 31 Mar 2014 • Xiangyu Chang, Yu Wang, Rongjian Li, Zongben Xu

Nevertheless, this framework has two serious drawbacks: the solution unavoidably involves a considerable portion of redundant noise features in many situations, and the framework neither offers an intuitive explanation of why it can select relevant features nor provides any theoretical guarantee of feature selection consistency.

Clustering, Feature Selection
