Search Results for author: Junshan Zhang

Found 31 papers, 4 papers with code

Warm-Start Actor-Critic: From Approximation Error to Sub-optimality Gap

no code implementations 20 Jun 2023 Hang Wang, Sen Lin, Junshan Zhang

To this end, the primary objective of this work is to build a fundamental understanding of whether and when online learning can be significantly accelerated by a warm-start policy from offline RL.

Offline RL, Reinforcement Learning (RL)

Adaptive Ensemble Q-learning: Minimizing Estimation Bias via Error Feedback

no code implementations NeurIPS 2021 Hang Wang, Sen Lin, Junshan Zhang

It is known that the estimation bias hinges heavily on the ensemble size (i.e., the number of Q-function approximators used in the target), and that determining the 'right' ensemble size is highly nontrivial, because of the time-varying nature of the function approximation errors during the learning process.

Q-Learning
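
Below is a minimal numpy sketch (an illustration only, not the paper's algorithm) of why the ensemble size used in the Q-target matters: each Q-function approximator is modeled as the true value plus zero-mean noise standing in for approximation error, and the target takes the maximum over actions of the minimum over an ensemble of size K. Small K over-estimates and large K under-estimates, which is the bias the adaptive scheme above aims to control.

```python
# Illustrative sketch only: how the ensemble size K used in the Q-target trades
# off over- vs under-estimation bias. Each approximator = true Q + noise; the
# target is max over actions of the min over K approximators.
import numpy as np

rng = np.random.default_rng(0)
true_q = np.array([1.0, 0.5, 0.2])      # true Q(s, a) for 3 actions
noise_std = 0.4                         # stand-in for function-approximation error

def target_bias(K, n_trials=20000):
    """Mean of max_a min_{k<=K} Q_k(s, a) minus the true max_a Q(s, a)."""
    noisy = true_q + noise_std * rng.standard_normal((n_trials, K, 3))
    return noisy.min(axis=1).max(axis=1).mean() - true_q.max()

for K in (1, 2, 4, 8):
    print(f"K={K}: estimation bias of the target = {target_bias(K):+.3f}")
```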

Efficient Self-supervised Continual Learning with Progressive Task-correlated Layer Freezing

no code implementations 13 Mar 2023 Li Yang, Sen Lin, Fan Zhang, Junshan Zhang, Deliang Fan

Inspired by the success of Self-supervised learning (SSL) in learning visual representations from unlabeled data, a few recent works have studied SSL in the context of continual learning (CL), where multiple tasks are learned sequentially, giving rise to a new paradigm, namely self-supervised continual learning (SSCL).

Continual Learning, Self-Supervised Learning

CLARE: Conservative Model-Based Reward Learning for Offline Inverse Reinforcement Learning

no code implementations 9 Feb 2023 Sheng Yue, Guanbo Wang, Wei Shao, Zhaofeng Zhang, Sen Lin, Ju Ren, Junshan Zhang

This work aims to tackle a major challenge in offline Inverse Reinforcement Learning (IRL), namely the reward extrapolation error, where the learned reward function may fail to explain the task correctly and misguide the agent in unseen environments due to the intrinsic covariate shift.

Continuous Control, reinforcement-learning +1

Algorithm Design for Online Meta-Learning with Task Boundary Detection

no code implementations 2 Feb 2023 Daouda Sow, Sen Lin, Yingbin Liang, Junshan Zhang

More specifically, we first propose two simple but effective detection mechanisms, for task switches and for distribution shift, based on empirical observations; these serve as a key building block for more elegant online model updates in our algorithm: the task-switch detection mechanism allows reuse of the best available model for the current task at hand, while the distribution-shift detection mechanism differentiates the meta-model update so as to preserve knowledge for in-distribution tasks and quickly learn new knowledge for out-of-distribution tasks.

Boundary Detection, Meta-Learning
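
The following is a hypothetical sketch of what such detection mechanisms could look like in code; the class name, statistics, and thresholds are illustrative assumptions, not the paper's design. A task switch is flagged when the current loss jumps well above its running average, and a distribution shift is flagged when incoming feature statistics drift far from a running estimate.

```python
# Hypothetical sketch (names and thresholds are assumptions, not from the paper).
import numpy as np

class TaskBoundaryDetectors:
    def __init__(self, loss_jump=2.0, shift_thresh=3.0, momentum=0.9):
        self.loss_jump = loss_jump        # loss must exceed this multiple of its running value
        self.shift_thresh = shift_thresh  # z-score-like distance that signals a shift
        self.momentum = momentum
        self.running_loss = None
        self.running_mean = None
        self.running_var = None

    def task_switch(self, loss):
        """Flag a task switch if the loss jumps well above its running average."""
        if self.running_loss is None:
            self.running_loss = loss
            return False
        switched = loss > self.loss_jump * self.running_loss
        self.running_loss = self.momentum * self.running_loss + (1 - self.momentum) * loss
        return switched

    def distribution_shift(self, features):
        """Flag a shift if the new batch mean drifts far from the running feature mean."""
        mu = features.mean(axis=0)
        if self.running_mean is None:
            self.running_mean = mu
            self.running_var = features.var(axis=0) + 1e-8
            return False
        dist = np.abs(mu - self.running_mean) / np.sqrt(self.running_var)
        shifted = dist.mean() > self.shift_thresh
        self.running_mean = self.momentum * self.running_mean + (1 - self.momentum) * mu
        return shifted
```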

HiFlash: Communication-Efficient Hierarchical Federated Learning with Adaptive Staleness Control and Heterogeneity-aware Client-Edge Association

no code implementations 16 Jan 2023 Qiong Wu, Xu Chen, Tao Ouyang, Zhi Zhou, Xiaoxi Zhang, Shusen Yang, Junshan Zhang

Federated learning (FL) is a promising paradigm that enables collaboratively learning a shared model across massive clients while keeping the training data locally.

Edge-computing, Federated Learning

Semantic Communications for Wireless Sensing: RIS-aided Encoding and Self-supervised Decoding

no code implementations 23 Nov 2022 Hongyang Du, Jiacheng Wang, Dusit Niyato, Jiawen Kang, Zehui Xiong, Junshan Zhang, Xuemin Shen

To select the task-related signal spectra for efficient encoding, a semantic hash sampling method is introduced.

Self-Supervised Learning

Beyond Not-Forgetting: Continual Learning with Backward Knowledge Transfer

no code implementations 1 Nov 2022 Sen Lin, Li Yang, Deliang Fan, Junshan Zhang

By learning a sequence of tasks continually, an agent in continual learning (CL) can improve its learning performance on both a new task and 'old' tasks by leveraging forward knowledge transfer and backward knowledge transfer, respectively.

Continual Learning, Transfer Learning

Attention-aware Resource Allocation and QoE Analysis for Metaverse xURLLC Services

2 code implementations 10 Aug 2022 Hongyang Du, Jiazhen Liu, Dusit Niyato, Jiawen Kang, Zehui Xiong, Junshan Zhang, Dong In Kim

Although conventional ultra-reliable and low-latency communications (URLLC) can satisfy objective KPIs, it is difficult to provide a personalized immersive experience that is a distinctive feature of the Metaverse.

Collaboration in Participant-Centric Federated Learning: A Game-Theoretical Perspective

no code implementations 25 Jul 2022 Guangjing Huang, Xu Chen, Tao Ouyang, Qian Ma, Lin Chen, Junshan Zhang

To coordinate the selfish and heterogeneous participants, we propose a novel analytic framework for incentivizing effective and efficient collaborations for participant-centric FL.

Federated Learning

Long-term Spatio-temporal Forecasting via Dynamic Multiple-Graph Attention

1 code implementation 23 Apr 2022 Wei Shao, Zhiling Jin, Shuo Wang, Yufan Kang, Xiao Xiao, Hamid Menouar, Zhaofeng Zhang, Junshan Zhang, Flora Salim

To address these issues, we construct new graph models to represent the contextual information of each node and the long-term spatio-temporal data dependency structure.

Graph Attention, Spatio-Temporal Forecasting

TRGP: Trust Region Gradient Projection for Continual Learning

1 code implementation ICLR 2022 Sen Lin, Li Yang, Deliang Fan, Junshan Zhang

To tackle this challenge, we propose Trust Region Gradient Projection (TRGP) for continual learning to facilitate the forward knowledge transfer based on an efficient characterization of task correlation.

Continual Learning, Transfer Learning
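
As a rough companion to the abstract above, here is a minimal sketch of the gradient-projection idea that this line of continual-learning work builds on (this is generic gradient projection, not TRGP itself; the trust-region scaling and task-correlation selection are omitted). Gradients for a new task are modified with respect to a low-dimensional subspace spanned by old-task representations so that updates interfere less with previously learned tasks.

```python
# Generic gradient-projection sketch (not TRGP itself).
import numpy as np

def old_task_subspace(old_inputs, energy=0.95):
    """Orthonormal basis capturing most of the variance of old-task activations."""
    U, S, _ = np.linalg.svd(old_inputs.T, full_matrices=False)
    k = int(np.searchsorted(np.cumsum(S**2) / np.sum(S**2), energy)) + 1
    return U[:, :k]

def project_out(grad, basis):
    """Remove the gradient component lying in the old-task subspace."""
    return grad - basis @ (basis.T @ grad)

rng = np.random.default_rng(0)
old_inputs = rng.standard_normal((200, 16))   # old-task activations (samples, features)
grad = rng.standard_normal(16)                # new-task gradient for one layer
B = old_task_subspace(old_inputs)
g_new = project_out(grad, B)
print("overlap with old-task subspace before/after:",
      round(float(np.linalg.norm(B.T @ grad)), 4),
      round(float(np.linalg.norm(B.T @ g_new)), 4))
```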

Model-Based Offline Meta-Reinforcement Learning with Regularization

no code implementations ICLR 2022 Sen Lin, Jialin Wan, Tengyu Xu, Yingbin Liang, Junshan Zhang

In particular, we devise a new meta-Regularized model-based Actor-Critic (RAC) method for within-task policy optimization, as a key building block of MerPO, using conservative policy evaluation and regularized policy improvement; the intrinsic tradeoff therein is achieved by striking the right balance between two regularizers, one based on the behavior policy and the other on the meta-policy.

Meta Reinforcement Learning, reinforcement-learning +2
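
For intuition, here is a hedged sketch of the kind of regularized policy-improvement objective the abstract describes; it is a simplified stand-in for discrete actions, not MerPO/RAC itself. The actor loss combines an expected-value term under conservative Q-estimates with two penalties, one pulling the policy toward the behavior policy and one toward the meta-policy, balanced by a weight lam.

```python
# Simplified stand-in for the two-regularizer objective (not MerPO/RAC itself).
import numpy as np

def kl_categorical(p, q, eps=1e-8):
    return float(np.sum(p * (np.log(p + eps) - np.log(q + eps))))

def regularized_actor_loss(pi, q_values, behavior_pi, meta_pi, alpha=1.0, lam=0.5):
    """Negative expected Q plus a lam-weighted mix of behavior- and meta-policy penalties."""
    value_term = -float(np.dot(pi, q_values))
    reg = lam * kl_categorical(pi, behavior_pi) + (1 - lam) * kl_categorical(pi, meta_pi)
    return value_term + alpha * reg

# Toy example with 3 discrete actions.
pi          = np.array([0.5, 0.3, 0.2])
q_values    = np.array([1.0, 0.2, -0.5])   # conservative (pessimistic) Q estimates
behavior_pi = np.array([0.6, 0.3, 0.1])
meta_pi     = np.array([0.2, 0.5, 0.3])
print(round(regularized_actor_loss(pi, q_values, behavior_pi, meta_pi, lam=0.7), 4))
```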

Communication-Efficient Distributed SGD with Compressed Sensing

no code implementations 15 Dec 2021 Yujie Tang, Vikram Ramanathan, Junshan Zhang, Na Li

We consider large-scale distributed optimization over a set of edge devices connected to a central server, where the limited communication bandwidth between the server and the edge devices imposes a significant bottleneck on the optimization procedure.

Distributed Optimization, Federated Learning
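
The sketch below illustrates the general compressed-sensing idea in this setting (it is not the paper's scheme and the recovery routine is a textbook one): a device transmits a short random projection of a sparse gradient instead of the full vector, and the server recovers an approximation with iterative hard thresholding.

```python
# Generic compressed-sensing illustration (not the paper's algorithm).
import numpy as np

rng = np.random.default_rng(0)
d, m, s = 1000, 300, 20                       # dimension, measurements, sparsity

grad = np.zeros(d)
support = rng.choice(d, size=s, replace=False)
grad[support] = rng.standard_normal(s)        # a sparse "gradient"

A = rng.standard_normal((m, d)) / np.sqrt(m)  # measurement matrix shared with the server
y = A @ grad                                  # the device transmits only m << d values

def iht(y, A, s, n_iter=100):
    """Iterative hard thresholding: a simple textbook sparse-recovery routine."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = x + A.T @ (y - A @ x)             # gradient step on the residual
        keep = np.argsort(np.abs(x))[-s:]     # keep only the s largest entries
        pruned = np.zeros_like(x)
        pruned[keep] = x[keep]
        x = pruned
    return x

recovered = iht(y, A, s)
print("relative recovery error:",
      float(np.linalg.norm(recovered - grad) / np.linalg.norm(grad)))
```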

A New Look at AI-Driven NOMA-F-RANs: Features Extraction, Cooperative Caching, and Cache-Aided Computing

no code implementations 2 Dec 2021 Zhong Yang, Yaru Fu, Yuanwei Liu, Yue Chen, Junshan Zhang

Non-orthogonal multiple access (NOMA) enabled fog radio access networks (NOMA-F-RANs) have been taken as a promising enabler to relieve network congestion, reduce delivery latency, and improve fog user equipments' (F-UEs') quality of service (QoS).

Edge-computing

Robust Event Classification Using Imperfect Real-world PMU Data

no code implementations 19 Oct 2021 Yunchuan Liu, Lei Yang, Amir Ghasemkhani, Hanif Livani, Virgilio A. Centeno, Pin-Yu Chen, Junshan Zhang

Specifically, the data preprocessing step addresses the data quality issues of PMU measurements (e.g., bad data and missing data); in the fine-grained event data extraction step, a model-free event detection method is developed to accurately localize the events from the inaccurate event timestamps in the event logs; and the feature engineering step constructs the event features based on the patterns of different event types, in order to improve the performance and the interpretability of the event classifiers.

Classification, Event Detection +1
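
As a concrete (and purely illustrative) skeleton of the three steps named above, the snippet below fills missing PMU samples by interpolation, localizes an event window around the largest signal change in a model-free way, and computes a few simple pattern features; the function names and rules are assumptions, not the paper's implementation.

```python
# Illustrative three-step skeleton: preprocess -> localize event -> extract features.
import numpy as np

def preprocess(signal):
    """Replace missing samples (NaN) by linear interpolation over valid ones."""
    x = signal.copy()
    bad = np.isnan(x)
    x[bad] = np.interp(np.flatnonzero(bad), np.flatnonzero(~bad), x[~bad])
    return x

def localize_event(signal, win=30):
    """Model-free localization: take a window centered where |difference| peaks."""
    change = np.abs(np.diff(signal))
    center = int(np.argmax(change))
    lo, hi = max(0, center - win), min(len(signal), center + win)
    return signal[lo:hi]

def event_features(window):
    """Simple pattern features that could feed an event classifier."""
    return {
        "peak_deviation": float(np.max(np.abs(window - window.mean()))),
        "variance": float(window.var()),
        "ramp": float(window[-1] - window[0]),
    }

# Toy PMU frequency trace with missing samples and a step-like event.
freq = 60.0 + 0.002 * np.random.default_rng(0).standard_normal(600)
freq[300:] -= 0.05                      # event: sudden frequency drop
freq[100:105] = np.nan                  # missing data
print(event_features(localize_event(preprocess(freq))))
```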

GROWN: GRow Only When Necessary for Continual Learning

no code implementations 3 Oct 2021 Li Yang, Sen Lin, Junshan Zhang, Deliang Fan

To address this issue, continual learning has been developed to learn new tasks sequentially and perform knowledge transfer from the old tasks to the new ones without forgetting.

Continual Learning, Transfer Learning

Distributed Learning with Strategic Users: A Repeated Game Approach

no code implementations NeurIPS 2021 Abdullah Basar Akbay, Junshan Zhang

We consider a distributed learning setting where strategic users are incentivized, by a cost-sensitive fusion center, to train a learning model based on local data.

Action Classification

Federated Learning over Wireless Networks: A Band-limited Coordinated Descent Approach

no code implementations 16 Feb 2021 Junshan Zhang, Na Li, Mehmet Dedeoglu

We consider a many-to-one wireless architecture for federated learning at the network edge, where multiple edge devices collaboratively train a model using local data.

Federated Learning

Deep Reinforcement Learning with Spatio-temporal Traffic Forecasting for Data-Driven Base Station Sleep Control

no code implementations 21 Jan 2021 Qiong Wu, Xu Chen, Zhi Zhou, Liang Chen, Junshan Zhang

To meet the ever-increasing mobile traffic demand in the 5G era, base stations (BSs) have been densely deployed in radio access networks (RANs) to increase the network coverage and capacity.

Distributed Q-Learning with State Tracking for Multi-agent Networked Control

no code implementations 22 Dec 2020 Hang Wang, Sen Lin, Hamid Jafarkhani, Junshan Zhang

Specifically, we assume that agents maintain local estimates of the global state based on their local information and communications with neighbors.

Q-Learning
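
A minimal sketch of the state-tracking idea is given below (illustrative assumptions only, not the paper's update rule): each agent holds a local estimate of the global state, repeatedly averages it with its neighbors' estimates through a doubly stochastic mixing matrix, and mixes in its own local observation, so the agents' estimates stay close to one another while reflecting the underlying global state.

```python
# Consensus-style state tracking over a ring of 4 agents (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
n_agents, dim = 4, 3
W = np.array([[0.50, 0.25, 0.00, 0.25],   # doubly stochastic mixing matrix (ring)
              [0.25, 0.50, 0.25, 0.00],
              [0.00, 0.25, 0.50, 0.25],
              [0.25, 0.00, 0.25, 0.50]])

global_state = rng.standard_normal(dim)
local_obs = global_state + 0.5 * rng.standard_normal((n_agents, dim))  # noisy local views
estimates = local_obs.copy()

for _ in range(20):
    estimates = W @ estimates                         # exchange with neighbors
    estimates = 0.9 * estimates + 0.1 * local_obs     # keep tracking local information

print("max disagreement between agents:", float(np.ptp(estimates, axis=0).max()))
print("distance of the average estimate to the global state:",
      float(np.linalg.norm(estimates.mean(axis=0) - global_state)))
```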

Inexact-ADMM Based Federated Meta-Learning for Fast and Continual Edge Learning

no code implementations 16 Dec 2020 Sheng Yue, Ju Ren, Jiang Xin, Sen Lin, Junshan Zhang

To overcome these challenges, we explore continual edge learning capable of leveraging the knowledge transfer from previous tasks.

Meta-Learning, Transfer Learning

Accelerating Distributed Online Meta-Learning via Multi-Agent Collaboration under Limited Communication

no code implementations 15 Dec 2020 Sen Lin, Mehmet Dedeoglu, Junshan Zhang

By characterizing the upper bound of the agent-task-averaged regret, we show that the performance of multi-agent online meta-learning depends heavily on how much an agent can benefit from the distributed network-level OCO for meta-model updates via limited communication, which, however, is not yet well understood.

Meta-Learning

FedHome: Cloud-Edge based Personalized Federated Learning for In-Home Health Monitoring

1 code implementation 14 Dec 2020 Qiong Wu, Xu Chen, Zhi Zhou, Junshan Zhang

In this paper, we propose FedHome, a novel cloud-edge based federated learning framework for in-home health monitoring, which learns a shared global model in the cloud from multiple homes at the network edges and achieves data privacy protection by keeping user data locally.

Human Activity Recognition, Personalized Federated Learning

CoEdge: Cooperative DNN Inference with Adaptive Workload Partitioning over Heterogeneous Edge Devices

no code implementations 6 Dec 2020 Liekang Zeng, Xu Chen, Zhi Zhou, Lei Yang, Junshan Zhang

CoEdge utilizes available computation and communication resources at the edge and dynamically partitions the DNN inference workload in a manner adaptive to devices' computing capabilities and network conditions.
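
A toy version of such capability- and bandwidth-aware partitioning is sketched below; the proportional rule and parameters are assumptions for illustration, not CoEdge's actual policy. Each device's share of the input rows is inversely proportional to its estimated per-row latency, which combines compute time and transfer time.

```python
# Toy workload partitioning across heterogeneous edge devices (illustrative only).
def partition_workload(total_rows, flops_per_row, bytes_per_row,
                       device_flops, bandwidth_bps):
    """Assign rows in inverse proportion to each device's per-row latency."""
    per_row_time = [flops_per_row / f + bytes_per_row * 8 / b
                    for f, b in zip(device_flops, bandwidth_bps)]
    speed = [1.0 / t for t in per_row_time]
    shares = [v / sum(speed) for v in speed]
    rows = [int(round(total_rows * sh)) for sh in shares]
    rows[-1] = total_rows - sum(rows[:-1])    # make the split exact
    return rows

# Example: 224 input rows split across three heterogeneous devices.
print(partition_workload(
    total_rows=224, flops_per_row=5e6, bytes_per_row=224 * 3 * 4,
    device_flops=[2e9, 8e9, 1e9], bandwidth_bps=[50e6, 20e6, 100e6]))
```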

MetaGater: Fast Learning of Conditional Channel Gated Networks via Federated Meta-Learning

no code implementations 25 Nov 2020 Sen Lin, Li Yang, Zhezhi He, Deliang Fan, Junshan Zhang

In this work, we advocate a holistic approach to jointly train the backbone network and the channel gating, which enables dynamic selection of a subset of filters for more efficient local computation given the data input.

Meta-Learning, Quantization
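
The snippet below gives a generic illustration of input-conditioned channel gating (not MetaGater's architecture or its federated meta-learning procedure): a small gating function scores a layer's output channels from the input and only the top-k channels are computed, so the amount of local computation depends on the data input.

```python
# Generic input-conditioned channel gating (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
in_ch, out_ch, k = 8, 16, 6
W_layer = 0.1 * rng.standard_normal((out_ch, in_ch))   # backbone layer weights
W_gate = 0.1 * rng.standard_normal((out_ch, in_ch))    # gating parameters

def gated_forward(x):
    scores = W_gate @ np.abs(x)          # score output channels from input statistics
    keep = np.argsort(scores)[-k:]       # dynamically selected subset of channels
    y = np.zeros(out_ch)
    y[keep] = W_layer[keep] @ x          # compute only the kept channels
    return y, keep

x = rng.standard_normal(in_ch)
y, keep = gated_forward(x)
print("active channels for this input:", sorted(keep.tolist()))
```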

System Identification via Meta-Learning in Linear Time-Varying Environments

no code implementations 27 Oct 2020 Sen Lin, Hang Wang, Junshan Zhang

System identification is a fundamental problem in reinforcement learning, control theory and signal processing, and the non-asymptotic analysis of the corresponding sample complexity is challenging and elusive, even for linear time-varying (LTV) systems.

Meta-Learning

KSM: Fast Multiple Task Adaption via Kernel-wise Soft Mask Learning

no code implementations CVPR 2021 Li Yang, Zhezhi He, Junshan Zhang, Deliang Fan

Thus motivated, we propose a new training method called kernel-wise Soft Mask (KSM), which learns a kernel-wise hybrid binary and real-value soft mask for each task, while using the same backbone model.

Continual Learning
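
A hedged sketch of the kernel-wise soft-mask idea follows (an illustration, not the exact KSM formulation or its training procedure): a frozen backbone convolution weight is adapted to a task by a per-kernel mask that combines a binary select/skip part with a real-valued scaling part.

```python
# Kernel-wise hybrid binary/real soft mask over a frozen backbone (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
out_ch, in_ch, k = 4, 3, 3
backbone = rng.standard_normal((out_ch, in_ch, k, k))   # shared across all tasks

# Task-specific mask parameters: one scalar per (out_ch, in_ch) kernel.
mask_logits = rng.standard_normal((out_ch, in_ch))
binary_part = (mask_logits > 0).astype(float)            # hard select/skip per kernel
soft_part = 1.0 / (1.0 + np.exp(-mask_logits))           # real-valued scaling per kernel
kernel_mask = binary_part * soft_part                     # hybrid mask

task_weights = backbone * kernel_mask[:, :, None, None]  # masked backbone for this task
print("kernels kept for this task:", int(binary_part.sum()), "of", out_ch * in_ch)
print("masked weight tensor shape:", task_weights.shape)
```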

Real-Time Edge Intelligence in the Making: A Collaborative Learning Framework via Federated Meta-Learning

no code implementations 9 Jan 2020 Sen Lin, Guang Yang, Junshan Zhang

Further, we investigate the convergence of the proposed federated meta-learning algorithm under mild conditions on node similarity and the adaptation performance at the target edge.

Meta-Learning
