Search Results for author: Tao Lin

Found 59 papers, 20 papers with code

Open-Source AI-based SE Tools: Opportunities and Challenges of Collaborative Software Learning

no code implementations 9 Apr 2024 ZhiHao Lin, Wei Ma, Tao Lin, Yaowen Zheng, Jingquan Ge, Jun Wang, Jacques Klein, Tegawendé Bissyandé, Yang Liu, Li Li

We introduce a governance framework centered on federated learning (FL), designed to foster the joint development and maintenance of open-source AI code models while safeguarding data privacy and security.

Federated Learning

DeFT: Flash Tree-attention with IO-Awareness for Efficient Tree-search-based LLM Inference

no code implementations 30 Mar 2024 Jinwei Yao, Kaiqi Chen, Kexun Zhang, Jiaxuan You, Binhang Yuan, Zeke Wang, Tao Lin

Decoding using tree search can greatly enhance the inference quality for transformer-based Large Language Models (LLMs).

Persuading a Learning Agent

no code implementations 15 Feb 2024 Tao Lin, YiLing Chen

We study a repeated Bayesian persuasion problem (and more generally, any generalized principal-agent problem with complete information) where the principal does not have commitment power and the agent uses algorithms to learn to respond to the principal's signals.

Switch EMA: A Free Lunch for Better Flatness and Sharpness

2 code implementations 14 Feb 2024 Siyuan Li, Zicheng Liu, Juanxi Tian, Ge Wang, Zedong Wang, Weiyang Jin, Di Wu, Cheng Tan, Tao Lin, Yang Liu, Baigui Sun, Stan Z. Li

Exponential Moving Average (EMA) is a widely used weight averaging (WA) regularization to learn flat optima for better generalizations without extra cost in deep neural network (DNN) optimization.
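
As a rough illustration of the mechanism (not the paper's exact algorithm), below is a minimal PyTorch sketch of EMA weight averaging with a periodic "switch" of the averaged weights back into the online model, which is what the title suggests; the decay and switch interval are assumptions.

```python
import copy

import torch

def ema_update(ema_model, model, decay=0.999):
    """One EMA step: ema <- decay * ema + (1 - decay) * online weights."""
    with torch.no_grad():
        for p_ema, p in zip(ema_model.parameters(), model.parameters()):
            p_ema.mul_(decay).add_(p, alpha=1 - decay)

# Hypothetical training loop on random data, for illustration only.
model = torch.nn.Linear(10, 2)
ema_model = copy.deepcopy(model)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
for step in range(100):
    x, y = torch.randn(32, 10), torch.randint(0, 2, (32,))
    loss = torch.nn.functional.cross_entropy(model(x), y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    ema_update(ema_model, model)
    if (step + 1) % 20 == 0:  # switch interval is an assumption
        model.load_state_dict(ema_model.state_dict())
```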

Attribute, Image Classification +7

Multi-Sender Persuasion -- A Computational Perspective

no code implementations 7 Feb 2024 Safwan Hossain, Tonghan Wang, Tao Lin, YiLing Chen, David C. Parkes, Haifeng Xu

The core solution concept here is the Nash equilibrium of senders' signaling policies.

Training-time Neuron Alignment through Permutation Subspace for Improving Linear Mode Connectivity and Model Fusion

no code implementations 2 Feb 2024 Zexi Li, Zhiqi Li, Jie Lin, Tao Shen, Tao Lin, Chao Wu

In deep learning, stochastic gradient descent often yields functionally similar yet widely scattered solutions in the weight space even under the same initialization, causing barriers in the Linear Mode Connectivity (LMC) landscape.
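
To make the LMC barrier concrete, here is a minimal PyTorch sketch that evaluates the loss along the linear path between two solutions; `loss_fn` is a hypothetical callable that evaluates a model on some fixed data, and matching state dict keys are assumed.

```python
import torch

def lmc_barrier(loss_fn, state_a, state_b, model, n_points=11):
    """Loss along the linear path (1 - t) * A + t * B between two weight-space
    solutions; the barrier is the excess of the path over its endpoints."""
    losses = []
    for t in torch.linspace(0, 1, n_points):
        interpolated = {k: (1 - t) * state_a[k] + t * state_b[k] for k in state_a}
        model.load_state_dict(interpolated)
        losses.append(loss_fn(model))
    return max(losses) - max(losses[0], losses[-1]), losses
```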

Federated Learning, Linear Mode Connectivity

Federated Unlearning: a Perspective of Stability and Fairness

no code implementations 2 Feb 2024 Jiaqi Shao, Tao Lin, Xuanyu Cao, Bing Luo

Our key contribution is a comprehensive theoretical analysis of the trade-offs in FU, which provides insights into the impact of data heterogeneity on FU.

Fairness

PathMMU: A Massive Multimodal Expert-Level Benchmark for Understanding and Reasoning in Pathology

no code implementations 29 Jan 2024 Yuxuan Sun, Hao Wu, Chenglu Zhu, Sunyi Zheng, Qizi Chen, Kai Zhang, Yunlong Zhang, Dan Wan, Xiaoxiao Lan, Mengyue Zheng, Jingxiong Li, Xinheng Lyu, Tao Lin, Lin Yang

To address this, we introduce PathMMU, the largest and highest-quality expert-validated pathology benchmark for Large Multimodal Models (LMMs).

Learning Thresholds with Latent Values and Censored Feedback

no code implementations 7 Dec 2023 Jiahao Zhang, Tao Lin, Weiqiang Zheng, Zhe Feng, Yifeng Teng, Xiaotie Deng

In this paper, we investigate the problem of actively learning a threshold in latent space, where the unknown reward $g(\gamma, v)$ depends on the proposed threshold $\gamma$ and the latent value $v$, and can be obtained only if the threshold is lower than or equal to the unknown latent value.
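
As a toy illustration of the censored-feedback setting (not the paper's algorithm), the sketch below estimates the revenue of a few candidate thresholds against uniformly random latent values; the reward form $g(\gamma, v) = \gamma$ when $\gamma \le v$, the value distribution, and the candidate grid are all assumptions.

```python
import random

def reward(threshold, latent_value):
    """Censored feedback: the reward is obtained only if threshold <= value."""
    return threshold if threshold <= latent_value else 0.0

random.seed(0)
candidates = [0.2, 0.4, 0.6, 0.8]          # hypothetical threshold grid
revenue = {c: 0.0 for c in candidates}
for c in candidates:
    for _ in range(10_000):
        v = random.random()                 # latent value, never observed directly
        revenue[c] += reward(c, v)
print(max(revenue, key=revenue.get))        # empirically best threshold
```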

On the Diversity and Realism of Distilled Dataset: An Efficient Dataset Distillation Paradigm

2 code implementations 6 Dec 2023 Peng Sun, Bei Shi, Daiwei Yu, Tao Lin

Contemporary machine learning requires training large neural networks on massive datasets, and thus faces the challenge of high computational cost.

Towards Robust Multi-Modal Reasoning via Model Selection

1 code implementation 12 Oct 2023 Xiangyan Liu, Rongxue Li, Wei Ji, Tao Lin

The reasoning capabilities of LLMs (Large Language Models) are widely acknowledged in recent research, inspiring studies on tool learning and autonomous agents.

Language Modelling, Large Language Model +1

Find Your Optimal Assignments On-the-fly: A Holistic Framework for Clustered Federated Learning

no code implementations 9 Oct 2023 Yongxin Guo, Xiaoying Tang, Tao Lin

To this end, this paper presents a comprehensive investigation into current clustered FL methods and proposes a four-tier framework, namely HCFL, to encompass and extend existing approaches.

Clustering, Federated Learning

Revisiting Implicit Models: Sparsity Trade-offs Capability in Weight-tied Model for Vision Tasks

no code implementations 16 Jul 2023 Haobo Song, Soumajit Majumder, Tao Lin

Implicit models such as Deep Equilibrium Models (DEQs) have garnered significant attention in the community for their ability to train infinite layer models with elegant solution-finding procedures and constant memory footprint.
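
For intuition, a minimal PyTorch sketch of a weight-tied implicit layer whose output is a fixed point $z^* = \tanh(W z^* + U x)$, solved by naive iteration; real DEQs achieve the constant memory footprint by differentiating through the fixed point implicitly, which this sketch omits.

```python
import torch

class TinyDEQ(torch.nn.Module):
    """Weight-tied implicit layer: find z* such that z* = tanh(W z* + U x)."""

    def __init__(self, dim):
        super().__init__()
        self.W = torch.nn.Linear(dim, dim, bias=False)
        self.U = torch.nn.Linear(dim, dim)

    def forward(self, x, n_iters=50, tol=1e-4):
        z = torch.zeros_like(x)
        for _ in range(n_iters):            # naive fixed-point iteration
            z_next = torch.tanh(self.W(z) + self.U(x))
            if (z_next - z).norm() < tol:
                break
            z = z_next
        return z

z_star = TinyDEQ(16)(torch.randn(4, 16))    # "infinite depth" with one weight set
```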

Benchmarking

On Pitfalls of Test-Time Adaptation

1 code implementation 6 Jun 2023 Hao Zhao, Yuejiang Liu, Alexandre Alahi, Tao Lin

Test-Time Adaptation (TTA) has recently emerged as a promising approach for tackling the robustness challenge under distribution shifts.
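
As background, a minimal sketch of a common TTA baseline, entropy minimization on unlabeled test batches in the spirit of Tent; this is illustrative and not necessarily the protocol evaluated in the paper (adapting all parameters here is a simplification; such methods typically adapt only normalization/affine parameters).

```python
import torch

def tta_step(model, x, optimizer):
    """One test-time adaptation step: minimize the entropy of the model's
    predictions on an unlabeled test batch."""
    probs = model(x).softmax(dim=1)
    entropy = -(probs * probs.clamp_min(1e-8).log()).sum(dim=1).mean()
    optimizer.zero_grad()
    entropy.backward()
    optimizer.step()
    return entropy.item()

model = torch.nn.Linear(10, 5)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
tta_step(model, torch.randn(32, 10), optimizer)
```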

Model Selection, Test-time Adaptation

Cooperative Hardware-Prompt Learning for Snapshot Compressive Imaging

no code implementations 1 Jun 2023 Jiamian Wang, Zongliang Wu, Yulun Zhang, Xin Yuan, Tao Lin, Zhiqiang Tao

In this work, we tackle this challenge by bringing prompt tuning and FL to snapshot compressive imaging for the first time, and propose a federated hardware-prompt learning (FedHP) method.

Federated Learning

Prediction with Incomplete Data under Agnostic Mask Distribution Shift

no code implementations 18 May 2023 Yichen Zhu, Jian Yuan, Bo Jiang, Tao Lin, Haiming Jin, Xinbing Wang, Chenghu Zhou

We focus on the case where the underlying joint distribution of complete features and label is invariant, but the missing pattern, i.e., the mask distribution, may shift agnostically between training and testing.

No Fear of Classifier Biases: Neural Collapse Inspired Federated Learning with Synthetic and Fixed Classifier

1 code implementation ICCV 2023 Zexi Li, Xinyi Shang, Rui He, Tao Lin, Chao Wu

Recent advances in neural collapse have shown that the classifiers and feature prototypes under perfect training scenarios collapse into an optimal structure called simplex equiangular tight frame (ETF).
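
A simplex ETF can be written down explicitly; the sketch below (an illustration, not necessarily the paper's construction) builds unit-norm class prototypes whose pairwise cosine similarity is exactly $-1/(C-1)$.

```python
import torch

def simplex_etf(num_classes: int, feat_dim: int) -> torch.Tensor:
    """Fixed classifier whose columns form a simplex ETF: unit norm and
    pairwise cosine similarity exactly -1 / (C - 1)."""
    assert feat_dim >= num_classes
    # Random orthonormal basis (feat_dim x C) via reduced QR decomposition.
    U, _ = torch.linalg.qr(torch.randn(feat_dim, num_classes))
    C = num_classes
    center = torch.eye(C) - torch.ones(C, C) / C
    return (C / (C - 1)) ** 0.5 * U @ center  # columns are class prototypes

M = simplex_etf(num_classes=10, feat_dim=64)
gram = M.T @ M  # diagonal ~1, off-diagonal ~ -1/9
```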

Classifier calibration, Federated Learning

Revisiting Weighted Aggregation in Federated Learning with Neural Networks

1 code implementation 14 Feb 2023 Zexi Li, Tao Lin, Xinyi Shang, Chao Wu

In federated learning (FL), weighted aggregation of local models is conducted to generate a global model, and the aggregation weights are normalized (the sum of weights is 1) and proportional to the local data sizes.
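
A minimal sketch of this standard weighted aggregation (FedAvg-style), assuming each client reports its model state dict and local dataset size.

```python
import torch

def weighted_aggregate(client_states, client_sizes):
    """Aggregation weights proportional to local data sizes, normalized to 1."""
    total = sum(client_sizes)
    weights = [n / total for n in client_sizes]
    return {
        key: sum(w * state[key] for w, state in zip(weights, client_states))
        for key in client_states[0]
    }

# Two hypothetical clients holding 100 and 300 samples:
a, b = {"w": torch.ones(2, 2)}, {"w": torch.zeros(2, 2)}
print(weighted_aggregate([a, b], [100, 300])["w"])  # every entry is 0.25
```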

Federated Learning

Persuading a Behavioral Agent: Approximately Best Responding and Learning

no code implementations 7 Feb 2023 YiLing Chen, Tao Lin

We show that, under natural assumptions, (1) the sender can find a signaling scheme that guarantees itself an expected utility almost as good as its optimal utility in the classic model, no matter what approximately best-responding strategy the receiver uses; (2) on the other hand, there is no signaling scheme that gives the sender much more utility than its optimal utility in the classic model, even if the receiver uses the approximately best-responding strategy that is best for the sender.

FedRC: Tackling Diverse Distribution Shifts Challenge in Federated Learning by Robust Clustering

no code implementations 29 Jan 2023 Yongxin Guo, Xiaoying Tang, Tao Lin

In this paper, we identify the learning challenges posed by the simultaneous occurrence of diverse distribution shifts and propose a clustering principle to overcome these challenges.

Clustering, Federated Learning

Decentralized Gradient Tracking with Local Steps

no code implementations 3 Jan 2023 Yue Liu, Tao Lin, Anastasia Koloskova, Sebastian U. Stich

Gradient tracking (GT) is an algorithm designed for solving decentralized optimization problems over a network (such as training a machine learning model).
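
For intuition, a minimal NumPy sketch of vanilla gradient tracking on a toy decentralized least-squares problem; the paper's contribution, local steps between communications, is omitted, and the fully connected gossip matrix is a simplifying assumption.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 4, 3
A = [rng.standard_normal((10, d)) for _ in range(n)]   # local data per agent
b = [rng.standard_normal(10) for _ in range(n)]

def local_grad(i, x):
    """Gradient of agent i's local least-squares objective."""
    return A[i].T @ (A[i] @ x - b[i]) / len(b[i])

W = np.full((n, n), 1.0 / n)   # gossip matrix; fully connected for simplicity
x = np.zeros((n, d))
y = np.array([local_grad(i, x[i]) for i in range(n)])  # trackers start at local grads
g_old = y.copy()
for _ in range(200):
    x = W @ x - 0.05 * y                   # consensus step plus tracked gradient step
    g_new = np.array([local_grad(i, x[i]) for i in range(n)])
    y = W @ y + g_new - g_old              # y_i tracks the network-average gradient
    g_old = g_new
```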

How Does Independence Help Generalization? Sample Complexity of ERM on Product Distributions

no code implementations 13 Dec 2022 Tao Lin

While many classical notions of learnability (e.g., PAC learnability) are distribution-free, utilizing the specific structures of an input distribution may improve learning performance.

FedBR: Improving Federated Learning on Heterogeneous Data via Local Learning Bias Reduction

1 code implementation 26 May 2022 Yongxin Guo, Xiaoying Tang, Tao Lin

As a remedy, we propose FedBR, a novel unified algorithm that reduces the local learning bias on features and classifiers to tackle these challenges.

Domain Generalization, Federated Learning

Test-Time Robust Personalization for Federated Learning

1 code implementation 22 May 2022 Liangze Jiang, Tao Lin

Personalized FL additionally adapts the global model to different clients, achieving promising results on consistent local training and test distributions.

Federated Learning

Adversarial Training for High-Stakes Reliability

no code implementations 3 May 2022 Daniel M. Ziegler, Seraphina Nix, Lawrence Chan, Tim Bauman, Peter Schmidt-Nielsen, Tao Lin, Adam Scherlis, Noa Nabeshima, Ben Weinstein-Raun, Daniel de Haas, Buck Shlegeris, Nate Thomas

We found that adversarial training increased robustness to the adversarial attacks that we trained on -- doubling the time for our contractors to find adversarial examples both with our tool (from 13 to 26 minutes) and without (from 20 to 44 minutes) -- without affecting in-distribution performance.

Text Generation, Vocal Bursts Intensity Prediction

Learning Disentangled Behaviour Patterns for Wearable-based Human Activity Recognition

1 code implementation 15 Feb 2022 Jie Su, Zhenyu Wen, Tao Lin, Yu Guan

To address this issue, in this work we propose a Behaviour Pattern Disentanglement (BPD) framework, which can disentangle behavior patterns from irrelevant noise such as personal styles or environmental noise.

Disentanglement, Human Activity Recognition

An Improved Analysis of Gradient Tracking for Decentralized Machine Learning

no code implementations NeurIPS 2021 Anastasia Koloskova, Tao Lin, Sebastian U. Stich

We consider decentralized machine learning over a network where the training data is distributed across $n$ agents, each of which can compute stochastic model updates on their local data.

BIG-bench Machine Learning

Towards Federated Learning on Time-Evolving Heterogeneous Data

no code implementations 25 Dec 2021 Yongxin Guo, Tao Lin, Xiaoying Tang

Federated Learning (FL) is a learning paradigm that protects privacy by keeping client data on edge devices.

Federated Learning

Learning by Active Forgetting for Neural Networks

no code implementations 21 Nov 2021 Jian Peng, Xian Sun, Min Deng, Chao Tao, Bo Tang, Wenbo Li, Guohua Wu, Qing Zhu, Yu Liu, Tao Lin, Haifeng Li

This paper presents a learning model based on an active forgetting mechanism for artificial neural networks.

RelaySum for Decentralized Deep Learning on Heterogeneous Data

1 code implementation NeurIPS 2021 Thijs Vogels, Lie He, Anastasia Koloskova, Tao Lin, Sai Praneeth Karimireddy, Sebastian U. Stich, Martin Jaggi

A key challenge, primarily in decentralized deep learning, remains the handling of differences between the workers' local data distributions.

Nash Convergence of Mean-Based Learning Algorithms in First Price Auctions

1 code implementation 8 Oct 2021 Xiaotie Deng, Xinyan Hu, Tao Lin, Weiqiang Zheng

Specifically, the results depend on the number of bidders with the highest value: if this number is at least three, the bidding dynamics almost surely converge to a Nash equilibrium of the auction, both in time-average and in last-iterate.
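
As a toy illustration of mean-based dynamics (a simplification of the paper's model; ties are handled only in expectation), each bidder repeatedly plays the bid with the highest historical mean reward under full-information feedback.

```python
values = [1.0, 1.0, 0.8]                    # two bidders share the highest value
bids = [b / 20 for b in range(21)]          # discretized bid space
totals = [{b: 0.0 for b in bids} for _ in values]  # cumulative rewards per bid

for t in range(5000):
    # Mean-based rule: play the bid with the highest historical mean reward
    # (all bids share the same count, so comparing totals suffices).
    choice = [max(bids, key=lambda b: totals[i][b]) for i in range(len(values))]
    for i in range(len(values)):
        rival = max(choice[:i] + choice[i + 1:])
        for b in bids:                      # full-information counterfactual update
            if b > rival:
                totals[i][b] += values[i] - b
            elif b == rival:
                totals[i][b] += (values[i] - b) / 2  # expected value of a coin-flip tie

print([max(bids, key=lambda b: totals[i][b]) for i in range(len(values))])
```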

Representation Memorization for Fast Learning New Knowledge without Forgetting

no code implementations 28 Aug 2021 Fei Mi, Tao Lin, Boi Faltings

In this paper, we consider scenarios that require learning new classes or data distributions quickly and incrementally over time, as it often occurs in real-world dynamic environments.

Image Classification, Language Modelling +1

The Optimal Size of an Epistemic Congress

no code implementations 2 Jul 2021 Manon Revel, Tao Lin, Daniel Halpern

We analyze the optimal size of a congress in a representative democracy.

Deep Learning for IoT

no code implementations 12 Apr 2021 Tao Lin

In addition, this paper presents research on a data retrieval solution to prevent attacks by adversaries, in the field of adversarial machine learning.

BIG-bench Machine Learning, Retrieval

Consensus Control for Decentralized Deep Learning

no code implementations 9 Feb 2021 Lingjing Kong, Tao Lin, Anastasia Koloskova, Martin Jaggi, Sebastian U. Stich

Decentralized training of deep learning models enables on-device learning over networks, as well as efficient scaling to large compute clusters.

Quasi-Global Momentum: Accelerating Decentralized Deep Learning on Heterogeneous Data

1 code implementation 9 Feb 2021 Tao Lin, Sai Praneeth Karimireddy, Sebastian U. Stich, Martin Jaggi

In this paper, we investigate and identify the limitation of several decentralized optimization algorithms for different degrees of data heterogeneity.

On the Effect of Consensus in Decentralized Deep Learning

no code implementations 1 Jan 2021 Tao Lin, Lingjing Kong, Anastasia Koloskova, Martin Jaggi, Sebastian U. Stich

Decentralized training of deep learning models enables on-device learning over networks, as well as efficient scaling to large compute clusters.

A Game-Theoretic Analysis of the Empirical Revenue Maximization Algorithm with Endogenous Sampling

no code implementations NeurIPS 2020 Xiaotie Deng, Ron Lavi, Tao Lin, Qi Qi, Wenwei Wang, Xiang Yan

The Empirical Revenue Maximization (ERM) algorithm is one of the most important price learning algorithms in auction design: as the literature shows, it can learn approximately optimal reserve prices for revenue-maximizing auctioneers in both repeated auctions and uniform-price auctions.

XCM: An Explainable Convolutional Neural Network for Multivariate Time Series Classification

2 code implementations 10 Sep 2020 Kevin Fauvel, Tao Lin, Véronique Masson, Élisa Fromont, Alexandre Termier

Then, we illustrate how XCM reconciles performance and explainability on a synthetic dataset, and show that XCM identifies the regions of the input data that are important for predictions more precisely than the current deep learning MTS classifier that also provides faithful explainability.

General Classification, Time Series +2

Learning Utilities and Equilibria in Non-Truthful Auctions

no code implementations NeurIPS 2020 Hu Fu, Tao Lin

In non-truthful auctions, agents' utility for a strategy depends on the strategies of the opponents and also the prior distribution over their private types; the set of Bayes Nash equilibria generally has an intricate dependence on the prior.

Ensemble Distillation for Robust Model Fusion in Federated Learning

1 code implementation NeurIPS 2020 Tao Lin, Lingjing Kong, Sebastian U. Stich, Martin Jaggi

In most current training schemes, the central model is refined by averaging the parameters of the server model and the updated parameters from the client side.
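
In the spirit of the paper's title, here is a minimal sketch of the alternative to parameter averaging: distilling the averaged client predictions into the server model on unlabeled data. The optimizer, loss, and step count are assumptions.

```python
import torch
import torch.nn.functional as F

def distill_from_ensemble(server, clients, unlabeled_x, lr=1e-3, steps=100):
    """Refine the server model by matching the averaged client predictions
    on unlabeled data, instead of averaging model parameters."""
    opt = torch.optim.Adam(server.parameters(), lr=lr)
    for _ in range(steps):
        with torch.no_grad():
            teacher = torch.stack([c(unlabeled_x) for c in clients]).mean(dim=0)
        loss = F.kl_div(
            F.log_softmax(server(unlabeled_x), dim=1),
            F.softmax(teacher, dim=1),
            reduction="batchmean",
        )
        opt.zero_grad()
        loss.backward()
        opt.step()
    return server
```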

BIG-bench Machine Learning, Federated Learning +1

Extrapolation for Large-batch Training in Deep Learning

no code implementations ICML 2020 Tao Lin, Lingjing Kong, Sebastian U. Stich, Martin Jaggi

Deep learning networks are typically trained by Stochastic Gradient Descent (SGD) methods that iteratively improve the model parameters by estimating a gradient on a very small fraction of the training data.

Masking as an Efficient Alternative to Finetuning for Pretrained Language Models

no code implementations EMNLP 2020 Mengjie Zhao, Tao Lin, Fei Mi, Martin Jaggi, Hinrich Schütze

We present an efficient method of utilizing pretrained language models, where we learn selective binary masks for pretrained weights in lieu of modifying them through finetuning.
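
A minimal sketch of one common way to learn such binary masks: real-valued scores thresholded in the forward pass, with a straight-through estimator in the backward pass. The paper's exact parameterization may differ.

```python
import torch

class MaskedLinear(torch.nn.Module):
    """Frozen pretrained weight with a learnable binary mask over its entries."""

    def __init__(self, pretrained_weight: torch.Tensor):
        super().__init__()
        self.weight = pretrained_weight.detach()      # pretrained, kept frozen
        self.scores = torch.nn.Parameter(torch.randn_like(pretrained_weight))

    def forward(self, x):
        hard = (self.scores > 0).float()
        # Straight-through estimator: the forward pass uses the hard mask,
        # the backward pass treats the mask as the identity in the scores.
        mask = hard + self.scores - self.scores.detach()
        return x @ (self.weight * mask).t()

layer = MaskedLinear(torch.randn(5, 10))
out = layer(torch.randn(3, 10))  # gradients flow only into layer.scores
```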

Deep Collaborative Embedding for information cascade prediction

no code implementations 18 Jan 2020 Yuhui Zhao, Ning Yang, Tao Lin, Philip S. Yu

First, the existing works often assume an underlying information diffusion model, which is impractical in the real world due to the complexity of information diffusion.

Overcoming Long-term Catastrophic Forgetting through Adversarial Neural Pruning and Synaptic Consolidation

1 code implementation 19 Dec 2019 Jian Peng, Bo Tang, Hao Jiang, Zhuo Li, Yinjie Lei, Tao Lin, Haifeng Li

This is due to two facts: first, as the model learns more tasks, the intersection of the low-error parameter subspaces satisfying these tasks becomes smaller or may even cease to exist; second, when the model learns a new task, the cumulative error keeps increasing as the model tries to protect the parameter configurations of previous tasks from interference.

Image Classification

Decentralized Deep Learning with Arbitrary Communication Compression

1 code implementation ICLR 2020 Anastasia Koloskova, Tao Lin, Sebastian U. Stich, Martin Jaggi

Decentralized training of deep learning models is a key element for enabling data privacy and on-device learning over networks, as well as for efficient scaling to large compute clusters.

Exploring Interpretable LSTM Neural Networks over Multi-Variable Data

3 code implementations 28 May 2019 Tian Guo, Tao Lin, Nino Antulov-Fantulin

In this paper, we explore the structure of LSTM recurrent neural networks to learn variable-wise hidden states, with the aim to capture different dynamics in multi-variable time series and distinguish the contribution of variables to the prediction.

Time Series, Time Series Analysis

T-GCN: A Temporal Graph Convolutional Network for Traffic Prediction

10 code implementations 12 Nov 2018 Ling Zhao, Yujiao Song, Chao Zhang, Yu Liu, Pu Wang, Tao Lin, Min Deng, Haifeng Li

However, traffic forecasting has always been considered an open scientific problem, owing to the constraints of the urban road network's topological structure and the law of dynamic change over time, namely spatial dependence and temporal dependence.

Management, Traffic Prediction

Exploring the interpretability of LSTM neural networks over multi-variable data

no code implementations 27 Sep 2018 Tian Guo, Tao Lin

In learning a predictive model over multivariate time series consisting of target and exogenous variables, the forecasting performance and interpretability of the model are both essential for deployment and uncovering knowledge behind the data.

Time Series, Time Series Analysis

Don't Use Large Mini-Batches, Use Local SGD

2 code implementations ICLR 2020 Tao Lin, Sebastian U. Stich, Kumar Kshitij Patel, Martin Jaggi

Mini-batch stochastic gradient descent (SGD) methods are the state of the art for distributed training of deep neural networks.
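
For intuition, a minimal PyTorch sketch of one Local SGD round: each worker takes several local SGD steps from the shared model, and only then are the models averaged, instead of synchronizing at every step as large mini-batch SGD effectively does. The number of local steps and the learning rate are assumptions.

```python
import copy

import torch

def local_sgd_round(global_model, worker_loaders, local_steps=8, lr=0.1):
    """One communication round of Local SGD: independent local steps per
    worker, followed by parameter averaging."""
    worker_states = []
    for loader in worker_loaders:
        model = copy.deepcopy(global_model)
        opt = torch.optim.SGD(model.parameters(), lr=lr)
        for (x, y), _ in zip(loader, range(local_steps)):
            loss = torch.nn.functional.cross_entropy(model(x), y)
            opt.zero_grad()
            loss.backward()
            opt.step()
        worker_states.append(model.state_dict())
    averaged = {k: sum(s[k] for s in worker_states) / len(worker_states)
                for k in worker_states[0]}
    global_model.load_state_dict(averaged)
    return global_model
```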

Multi-variable LSTM neural network for autoregressive exogenous model

no code implementations 17 Jun 2018 Tian Guo, Tao Lin

In this paper, we propose multi-variable LSTM capable of accurate forecasting and variable importance interpretation for time series with exogenous variables.

Time Series, Time Series Analysis

An interpretable LSTM neural network for autoregressive exogenous model

no code implementations 14 Apr 2018 Tian Guo, Tao Lin, Yao Lu

In this paper, we propose an interpretable LSTM recurrent neural network, i.e., a multi-variable LSTM, for time series with exogenous variables.

Time Series, Time Series Analysis

Training DNNs with Hybrid Block Floating Point

no code implementations NeurIPS 2018 Mario Drumond, Tao Lin, Martin Jaggi, Babak Falsafi

We identify block floating point (BFP) as a promising alternative representation since it exhibits wide dynamic range and enables the majority of DNN operations to be performed with fixed-point logic.
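
For intuition, a minimal NumPy sketch of block floating point quantization: each block of values shares a single exponent (set by its largest magnitude) and stores fixed-point mantissas. The block size and mantissa width are assumptions.

```python
import numpy as np

def to_bfp(x: np.ndarray, block_size: int = 8, mantissa_bits: int = 8):
    """Quantize-dequantize a vector with block floating point."""
    blocks = x.reshape(-1, block_size)
    # One shared exponent per block, chosen from the largest magnitude.
    exp = np.ceil(np.log2(np.abs(blocks).max(axis=1, keepdims=True) + 1e-30))
    lsb = 2.0 ** (exp - (mantissa_bits - 1))          # value of one mantissa step
    mantissa = np.clip(np.round(blocks / lsb),
                       -(2 ** (mantissa_bits - 1)),
                       2 ** (mantissa_bits - 1) - 1)  # fixed-point mantissas
    return (mantissa * lsb).reshape(x.shape)

x = np.random.randn(64)
err = np.abs(to_bfp(x) - x).max()  # small quantization error
```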

RubyStar: A Non-Task-Oriented Mixture Model Dialog System

no code implementations 8 Nov 2017 Huiting Liu, Tao Lin, Hanfei Sun, Weijian Lin, Chih-Wei Chang, Teng Zhong, Alexander Rudnicky

RubyStar is a dialog system designed to create "human-like" conversation by combining different response generation strategies.

Question Answering, Response Generation +1
