Search Results for author: Rui Kong

Found 6 papers, 3 papers with code

A Gray-Box Stability Analysis Mechanism for Power Electronic Converters

no code implementations • 15 Apr 2024 • Rui Kong, Subham Sahoo, Yubo Song, Frede Blaabjerg

This paper proposes a gray-box stability analysis mechanism based on data-driven dynamic mode decomposition (DMD) for commercial grid-tied power electronics converters with limited information on their control parameters and topology.
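
A minimal sketch of exact dynamic mode decomposition (DMD), the data-driven building block named in the abstract, is given below; the snapshot matrix, rank, and function name are hypothetical, and the paper's actual identification pipeline and stability criterion are not reproduced here.

import numpy as np

def dmd_eigenvalues(snapshots: np.ndarray, rank: int = 10) -> np.ndarray:
    """Estimate discrete-time DMD eigenvalues from a (states x time) snapshot matrix."""
    X, Y = snapshots[:, :-1], snapshots[:, 1:]          # time-shifted snapshot pairs
    U, s, Vh = np.linalg.svd(X, full_matrices=False)    # SVD of the first snapshot block
    r = min(rank, len(s))
    U_r, s_r, V_r = U[:, :r], s[:r], Vh[:r, :].conj().T
    # Reduced-order linear operator approximating Y ~ A X in the leading-SVD subspace
    A_tilde = U_r.conj().T @ Y @ V_r @ np.diag(1.0 / s_r)
    return np.linalg.eigvals(A_tilde)

Eigenvalues inside the unit circle correspond to decaying modes, which is the kind of evidence a stability screen over measured converter waveforms could use.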

Personal LLM Agents: Insights and Survey about the Capability, Efficiency and Security

2 code implementations • 10 Jan 2024 • Yuanchun Li, Hao Wen, Weijun Wang, Xiangyu Li, Yizhen Yuan, Guohong Liu, Jiacheng Liu, Wenxing Xu, Xiang Wang, Yi Sun, Rui Kong, Yile Wang, Hanfei Geng, Jian Luan, Xuefeng Jin, Zilong Ye, Guanjing Xiong, Fan Zhang, Xiang Li, Mengwei Xu, Zhijun Li, Peng Li, Yang Liu, Ya-Qin Zhang, Yunxin Liu

Next, we discuss several key challenges to achieve intelligent, efficient and secure Personal LLM Agents, followed by a comprehensive survey of representative solutions to address these challenges.

ACT: Empowering Decision Transformer with Dynamic Programming via Advantage Conditioning

1 code implementation • 12 Sep 2023 • Chen-Xiao Gao, Chenyang Wu, Mingjun Cao, Rui Kong, Zongzhang Zhang, Yang Yu

Third, we train an Advantage-Conditioned Transformer (ACT) to generate actions conditioned on the estimated advantages.

Action Generation
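
The advantage-conditioned generation step described in the ACT abstract can be illustrated with a small sketch, assuming advantages have already been estimated (e.g. by dynamic programming on a learned value function); the class name, dimensions, and the simple MLP standing in for the transformer are hypothetical, not the authors' implementation.

import torch
import torch.nn as nn

class AdvantageConditionedPolicy(nn.Module):
    """Toy policy that predicts an action from the state and a scalar advantage."""
    def __init__(self, state_dim: int, act_dim: int, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + 1, hidden), nn.ReLU(),
            nn.Linear(hidden, act_dim),
        )

    def forward(self, state: torch.Tensor, advantage: torch.Tensor) -> torch.Tensor:
        # Condition the action prediction on the estimated advantage.
        return self.net(torch.cat([state, advantage], dim=-1))

# At evaluation time, conditioning on a high target advantage is what steers
# the model toward good actions.
# policy = AdvantageConditionedPolicy(state_dim=17, act_dim=6)
# action = policy(state, torch.full((state.shape[0], 1), 0.9))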

SwapMoE: Efficient Memory-Constrained Serving of Large Sparse MoE Models via Dynamic Expert Pruning and Swapping

no code implementations • 29 Aug 2023 • Rui Kong, Yuanchun Li, Qingtian Feng, Weijun Wang, Linghe Kong, Yunxin Liu

The main idea of SwapMoE is to keep a small dynamic set of important experts, namely Virtual Experts, in the main memory for inference, while seamlessly maintaining how the Virtual Experts map to the actual experts.

Object Detection
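
A minimal sketch of the virtual-to-actual expert bookkeeping described in the SwapMoE abstract follows, assuming experts can be loaded from storage on demand; class and method names are hypothetical, and least-recently-used eviction stands in for the importance-based selection used in the paper.

from collections import OrderedDict

class VirtualExpertPool:
    """Keeps a small, fixed-size set of expert weights resident in memory."""
    def __init__(self, num_virtual: int, load_fn):
        self.num_virtual = num_virtual      # memory budget, in number of experts
        self.load_fn = load_fn              # loads an actual expert's weights from storage
        self.resident = OrderedDict()       # actual expert id -> weights, in recency order

    def get(self, expert_id: int):
        """Return weights for expert_id, swapping it into memory if needed."""
        if expert_id in self.resident:
            self.resident.move_to_end(expert_id)                # mark as recently used
        else:
            if len(self.resident) >= self.num_virtual:
                self.resident.popitem(last=False)               # evict the stalest expert
            self.resident[expert_id] = self.load_fn(expert_id)  # swap the actual expert in
        return self.resident[expert_id]

# pool = VirtualExpertPool(num_virtual=4, load_fn=lambda i: f"weights-{i}")
# weights = pool.get(top_expert_for_token)   # misses trigger a swap within the budget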

PatchBackdoor: Backdoor Attack against Deep Neural Networks without Model Modification

1 code implementation • 22 Aug 2023 • Yizhen Yuan, Rui Kong, Shenghao Xie, Yuanchun Li, Yunxin Liu

However, most backdoor attacks have to modify the neural network models through training with poisoned data and/or direct model editing, which leads to a common but false belief that backdoor attacks can be easily avoided by properly protecting the model.

Backdoor Attack, Model Editing
