Search Results for author: Yinchuan Li

Found 29 papers, 5 papers with code

Does Combining Parameter-efficient Modules Improve Few-shot Transfer Accuracy?

no code implementations23 Feb 2024 Nader Asadi, Mahdi Beitollahi, Yasser Khalil, Yinchuan Li, Guojun Zhang, Xi Chen

Parameter-efficient fine-tuning stands as the standard for efficiently fine-tuning large language and vision models on downstream tasks.

Device Activity Detection and Channel Estimation for Millimeter-Wave Massive MIMO

no code implementations7 Feb 2024 Yinchuan Li, Yuancheng Zhan, Le Zheng, Xiaodong Wang

Different from traditional compressed sensing (CS) methods that only use the sparsity of user activities, we develop several Approximate Message Passing (AMP) based CS algorithms by exploiting the sparsity of both user activities and mmWave channels (a generic AMP iteration is sketched below).

Action Detection Activity Detection +1
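For orientation, here is a minimal NumPy sketch of the generic AMP iteration with a soft-thresholding denoiser, under the usual sparse linear model y = Ax + noise. The noise-adaptive threshold and the function names are illustrative assumptions; the paper's algorithms additionally exploit mmWave channel structure that this sketch does not model.

```python
import numpy as np

def soft_threshold(v, theta):
    """Soft-thresholding denoiser used in AMP for sparse signals."""
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

def amp_sparse_recovery(y, A, n_iters=30):
    """Generic AMP sketch for recovering a sparse x from y = A @ x + noise.

    Illustrative only: the threshold schedule below is a common heuristic,
    not the paper's tuned variant.
    """
    m, n = A.shape
    x = np.zeros(n)
    z = y.copy()
    for _ in range(n_iters):
        # Pseudo-data: current estimate plus matched-filter residual
        r = x + A.T @ z
        # Threshold proportional to the residual's empirical std
        theta = np.linalg.norm(z) / np.sqrt(m)
        x_new = soft_threshold(r, theta)
        # Residual update with the Onsager correction term
        onsager = (z / m) * np.count_nonzero(x_new)
        z = y - A @ x_new + onsager
        x = x_new
    return x
```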

Teach Large Language Models to Forget Privacy

no code implementations30 Dec 2023 Ran Yan, YuJun Li, Wenqian Li, Peihua Mai, Yan Pang, Yinchuan Li

Large Language Models (LLMs) have proven powerful, but the risk of privacy leakage remains a significant concern.

Privacy Preserving Zero-shot Generalization

A Theory of Non-Acyclic Generative Flow Networks

no code implementations23 Dec 2023 Leo Maxime Brunswic, Yinchuan Li, Yushun Xu, Shangling Jui, Lizhuang Ma

GFlowNets are a novel flow-based method for learning a stochastic policy that generates objects via a sequence of actions, with probability proportional to a given positive reward.
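As background, a standard way to train GFlowNets in the acyclic setting is the trajectory-balance objective; the sketch below is generic and its names are illustrative, not the paper's notation.

```python
import torch

def trajectory_balance_loss(log_Z, sum_log_pf, sum_log_pb, log_reward):
    """Trajectory-balance objective often used to train GFlowNets.

    Minimizing it over sampled trajectories drives the probability of
    generating a terminal object x toward P(x) proportional to R(x).

    log_Z      : learnable scalar, log of the total flow (partition function)
    sum_log_pf : sum of log forward-policy probabilities along the trajectory
    sum_log_pb : sum of log backward-policy probabilities along the trajectory
    log_reward : log R(x) of the terminal object x
    """
    return (log_Z + sum_log_pf - log_reward - sum_log_pb) ** 2

# In practice log_Z is a learnable parameter and the two sums are
# accumulated while rolling out the forward policy.
log_Z = torch.nn.Parameter(torch.zeros(()))
loss = trajectory_balance_loss(log_Z, torch.tensor(-2.3),
                               torch.tensor(-1.1), torch.tensor(0.5))
```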

Understanding Prompt Tuning for V-L Models Through the Lens of Neural Collapse

no code implementations28 Jun 2023 Didi Zhu, Zexi Li, Min Zhang, Junkun Yuan, Yunfeng Shao, Jiashuo Liu, Kun Kuang, Yinchuan Li, Chao Wu

It is found that the neural collapse (NC) optimality of text-to-image representations correlates positively with downstream generalizability, and that this effect is more pronounced under class-imbalanced settings.

Meta Generative Flow Networks with Personalization for Task-Specific Adaptation

no code implementations16 Jun 2023 Xinyuan Ji, Xu Zhang, Wei Xi, Haozhi Wang, Olga Gadyatskaya, Yinchuan Li

Multi-task reinforcement learning and meta-reinforcement learning have been developed to quickly adapt to new tasks, but they tend to focus on tasks with higher rewards and more frequent occurrences, leading to poor performance on tasks with sparse rewards.

Meta-Learning Meta Reinforcement Learning +1

GFlowNets with Human Feedback

no code implementations11 May 2023 Yinchuan Li, Shuang Luo, Yunfeng Shao, Jianye Hao

We propose the GFlowNets with Human Feedback (GFlowHF) framework to improve the exploration ability when training AI models.

Generalized Universal Domain Adaptation with Generative Flow Networks

no code implementations8 May 2023 Didi Zhu, Yinchuan Li, Yunfeng Shao, Jianye Hao, Fei Wu, Kun Kuang, Jun Xiao, Chao Wu

We introduce a new problem in unsupervised domain adaptation, termed as Generalized Universal Domain Adaptation (GUDA), which aims to achieve precise prediction of all target labels including unknown categories.

Universal Domain Adaptation Unsupervised Domain Adaptation

Generative Flow Networks for Precise Reward-Oriented Active Learning on Graphs

no code implementations24 Apr 2023 Yinchuan Li, Zhigang Li, Wenqian Li, Yunfeng Shao, Yan Zheng, Jianye Hao

Many score-based active learning methods have been successfully applied to graph-structured data, using predefined score functions to reduce the number of required labels and improve the performance of graph neural networks.

Active Learning

Multi-agent Policy Reciprocity with Theoretical Guarantee

no code implementations12 Apr 2023 Haozhi Wang, Yinchuan Li, Qing Wang, Yunfeng Shao, Jianye Hao

We then define an adjacency space for mismatched states and design a plug-and-play module for value iteration, which enables agents to infer more precise returns.

Continuous Control Multi-agent Reinforcement Learning +1

Federated Learning via Variational Bayesian Inference: Personalization, Sparsity and Clustering

no code implementations8 Mar 2023 Xu Zhang, Wenpeng Li, Yunfeng Shao, Yinchuan Li

For non-IID data, we propose a clustered Bayesian FL model named cFedbayes that learns different prior distributions for different clients.

Bayesian Inference Clustering +1

DAG Matters! GFlowNets Enhanced Explainer For Graph Neural Networks

1 code implementation4 Mar 2023 Wenqian Li, Yinchuan Li, Zhigang Li, Jianye Hao, Yan Pang

Uncovering rationales behind predictions of graph neural networks (GNNs) has received increasing attention over the years.

Combinatorial Optimization

CFlowNets: Continuous Control with Generative Flow Networks

no code implementations4 Mar 2023 Yinchuan Li, Shuang Luo, Haozhi Wang, Jianye Hao

Generative flow networks (GFlowNets), as an emerging technique, can be used as an alternative to reinforcement learning for exploratory control tasks.

Active Learning Continuous Control +2

Asymmetric Temperature Scaling Makes Larger Networks Teach Well Again

no code implementations10 Oct 2022 Xin-Chun Li, Wen-Shu Fan, Shaoming Song, Yinchuan Li, Bingshuai Li, Yunfeng Shao, De-Chuan Zhan

Complex teachers tend to be over-confident, and traditional temperature scaling limits the efficacy of class discriminability, resulting in less discriminative wrong-class probabilities (an illustrative sketch of asymmetric temperatures appears below).

Knowledge Distillation
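As a rough illustration of the asymmetric idea, the sketch below applies separate temperatures to the teacher's target-class logit and its wrong-class logits before the softmax; the function name, temperature values, and the exact split are assumptions, not the paper's scheme.

```python
import numpy as np

def asymmetric_temperature_softmax(logits, target, t_correct=4.0, t_wrong=1.0):
    """Apply separate temperatures to the target-class logit and the
    wrong-class logits before the softmax (illustrative values only)."""
    scaled = logits.astype(float).copy()
    scaled[target] /= t_correct           # soften the over-confident target logit
    mask = np.ones_like(scaled, dtype=bool)
    mask[target] = False
    scaled[mask] /= t_wrong               # keep wrong-class logits discriminative
    e = np.exp(scaled - scaled.max())     # numerically stable softmax
    return e / e.sum()

# Example: a confident teacher prediction for class 0
probs = asymmetric_temperature_softmax(np.array([9.0, 2.0, 1.5, 0.5]), target=0)
```

In distillation, the student would then be trained against these softened teacher probabilities.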

On the Convergence Theory of Meta Reinforcement Learning with Personalized Policies

no code implementations21 Sep 2022 Haozhi Wang, Qing Wang, Yunfeng Shao, Dong Li, Jianye Hao, Yinchuan Li

Modern meta-reinforcement learning (Meta-RL) methods are mainly developed based on model-agnostic meta-learning, which performs policy gradient steps across tasks to maximize policy performance.

Continuous Control Meta-Learning +3

Tensor Decomposition based Personalized Federated Learning

no code implementations27 Aug 2022 Qing Wang, Jing Jin, Xiaofeng Liu, Huixuan Zong, Yunfeng Shao, Yinchuan Li

Federated learning (FL) is a new distributed machine learning framework that can achieve reliable collaborative training without collecting users' private data.

Model Optimization Personalized Federated Learning +1

S2RL: Do We Really Need to Perceive All States in Deep Multi-Agent Reinforcement Learning?

no code implementations20 Jun 2022 Shuang Luo, Yinchuan Li, Jiahui Li, Kun Kuang, Furui Liu, Yunfeng Shao, Chao Wu

To this end, we propose a sparse-state-based MARL (S2RL) framework, which utilizes a sparse attention mechanism to discard irrelevant information in local observations (a generic top-k sparse attention sketch appears below).

Multi-agent Reinforcement Learning Reinforcement Learning (RL) +2
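The sketch below shows one generic way to realize such sparsity, top-k masking of attention logits, so that an agent attends to only a few entities in its local observation; it is an illustration under our own naming, not the exact S2RL mechanism.

```python
import numpy as np

def topk_sparse_attention(q, K, V, k=4):
    """Keep only the k largest attention logits and mask the rest,
    so an agent attends to a small subset of observed entities.

    q : (d,) query vector; K : (n, d) keys; V : (n, d_v) values.
    """
    logits = K @ q / np.sqrt(q.shape[0])            # (n,)
    kth = np.partition(logits, -k)[-k]              # k-th largest logit
    masked = np.where(logits >= kth, logits, -np.inf)
    w = np.exp(masked - masked.max())               # stable softmax over survivors
    w /= w.sum()
    return w @ V
```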

Personalized Federated Learning via Variational Bayesian Inference

1 code implementation16 Jun 2022 Xu Zhang, Yinchuan Li, Wenpeng Li, Kaiyang Guo, Yunfeng Shao

Federated learning faces huge challenges from model overfitting due to limited local data and the statistical diversity among clients.

Bayesian Inference Personalized Federated Learning +1

Sparse Federated Learning with Hierarchical Personalized Models

no code implementations25 Mar 2022 Xiaofeng Liu, Qing Wang, Yunfeng Shao, Yinchuan Li

To this end, we propose a personalized FL algorithm using a hierarchical proximal mapping based on the Moreau envelope, named sparse federated learning with hierarchical personalized models (sFedHP), which significantly improves global model performance in the face of diverse data (a generic proximal-update sketch appears below).

Autonomous Vehicles Federated Learning
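As a rough illustration of hierarchical proximal personalization, the sketch below first pulls a client model toward the global model via a quadratic proximal step and then sparsifies it with the L1 proximal operator; the composition and both weights are assumptions, not sFedHP's exact operator.

```python
import numpy as np

def prox_l1(v, lam):
    """Proximal operator of lam * ||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def hierarchical_prox_step(client_w, global_w, lam_sparse=0.01, lam_prox=0.1):
    """Illustrative hierarchical proximal update.

    Step 1: closed-form prox of (lam_prox / 2) * ||w - global_w||^2,
            which pulls the client model toward the global model.
    Step 2: L1 prox, which sparsifies the pulled model.
    """
    pulled = (client_w + lam_prox * global_w) / (1.0 + lam_prox)
    return prox_l1(pulled, lam_sparse)
```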

Towards Effective Clustered Federated Learning: A Peer-to-peer Framework with Adaptive Neighbor Matching

no code implementations23 Mar 2022 Zexi Li, Jiaxun Lu, Shuang Luo, Didi Zhu, Yunfeng Shao, Yinchuan Li, Zhimeng Zhang, Yongheng Wang, Chao Wu

In the literature, centralized clustered FL algorithms require the number of clusters to be assumed in advance and hence are not effective enough to explore the latent relationships among clients.

Federated Learning

Sparse Personalized Federated Learning

no code implementations12 Jul 2021 Xiaofeng Liu, Yinchuan Li, Qing Wang, Xu Zhang, Yunfeng Shao, Yanhui Geng

By incorporating an approximated L1-norm and the correlation between client models and the global model into the standard FL loss function, performance on statistically diverse data is improved, and the communication and computation loads required in the network are reduced compared with non-sparse FL (an illustrative objective is sketched below).

Personalized Federated Learning
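An illustrative client objective in this spirit, with a smoothed L1 surrogate and a proximal term coupling client and global models, is sketched below; the smoothing and both regularization weights are assumptions, not the paper's exact formulation.

```python
import torch

def sparse_personalized_loss(task_loss, client_params, global_params,
                             lam_sparse=1e-4, lam_prox=1e-2, eps=1e-8):
    """Sketch of a client objective combining the task loss with
    (i) a smoothed L1 penalty (sqrt(w^2 + eps)) encouraging sparsity and
    (ii) a proximal term tying the client model to the global model.
    """
    l1_smooth = sum(torch.sqrt(w ** 2 + eps).sum() for w in client_params)
    prox = sum(((w - g) ** 2).sum()
               for w, g in zip(client_params, global_params))
    return task_loss + lam_sparse * l1_smooth + lam_prox * prox
```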

Structured Directional Pruning via Perturbation Orthogonal Projection

no code implementations12 Jul 2021 Yinchuan Li, Xiaofeng Liu, Yunfeng Shao, Qing Wang, Yanhui Geng

Structured pruning is an effective compression technique to reduce the computation of neural networks, which is usually achieved by adding perturbations to reduce network parameters at the cost of slightly increasing training loss.

ADMM-Net for Communication Interference Removal in Stepped-Frequency Radar

no code implementations26 Sep 2020 Jeremy Johnston, Yinchuan Li, Marco Lops, Xiaodong Wang

Complex ADMM-Net, a complex-valued neural network architecture inspired by the alternating direction method of multipliers (ADMM), is designed for interference removal in super-resolution stepped-frequency radar angle-range-Doppler imaging.

Super-Resolution

DP-LSTM: Differential Privacy-inspired LSTM for Stock Prediction Using Financial News

4 code implementations20 Dec 2019 Xinyi Li, Yinchuan Li, Hongyang Yang, Liuqing Yang, Xiao-Yang Liu

In this paper, we propose a novel deep neural network, DP-LSTM, for stock price prediction, which incorporates news articles as hidden information and integrates different news sources through a differential privacy mechanism (an illustrative noise-injection sketch appears below).

Stock Prediction Stock Price Prediction
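As a rough illustration of the noise-injection idea, the sketch below perturbs per-source news sentiment features with Laplace noise before they enter the LSTM; the Laplace choice and scale are assumptions, not necessarily the paper's exact mechanism.

```python
import numpy as np

def dp_perturb_sentiment(sentiment_scores, scale=0.1, rng=None):
    """Differential-privacy-inspired perturbation: add Laplace noise to
    per-source sentiment features so no single news source dominates and
    the downstream LSTM is robustified to input noise.

    `scale` is an illustrative Laplace parameter, not a calibrated budget.
    """
    rng = rng or np.random.default_rng(0)
    noise = rng.laplace(loc=0.0, scale=scale,
                        size=np.shape(sentiment_scores))
    return np.asarray(sentiment_scores) + noise

# Example: three sources' daily sentiment scores for one stock
noisy = dp_perturb_sentiment([0.3, -0.1, 0.5])
```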

Risk Management via Anomaly Circumvent: Mnemonic Deep Learning for Midterm Stock Prediction

no code implementations3 Aug 2019 Xinyi Li, Yinchuan Li, Xiao-Yang Liu, Christina Dan Wang

In this paper, we propose a novel deep neural network Mid-LSTM for midterm stock prediction, which incorporates the market trend as hidden states.

Management Stock Prediction +1
