no code implementations • 18 Mar 2024 • Yejia Liu, Shijin Duan, Xiaolin Xu, Shaolei Ren
To improve the accuracy of a small model, knowledge distillation is a popular method.
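As background, the standard distillation term (Hinton-style soft targets, shown generically; not this paper's specific method) can be sketched as:

```python
import numpy as np

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """Generic knowledge-distillation term: cross-entropy between
    temperature-softened teacher and student distributions."""
    def softmax(z):
        e = np.exp(z - z.max())
        return e / e.sum()
    p_teacher = softmax(teacher_logits / T)
    p_student = softmax(student_logits / T)
    # The T*T factor keeps gradient magnitudes comparable across temperatures.
    return float(-(p_teacher * np.log(p_student + 1e-12)).sum() * T * T)
```

A higher temperature `T` softens the teacher's distribution, exposing inter-class similarity information that hard labels discard.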
no code implementations • 6 Nov 2023 • Jieming Bian, Lei Wang, Shaolei Ren, Jie Xu
Training large-scale artificial intelligence (AI) models demands significant computational power and energy, leading to increased carbon footprint with potential environmental repercussions.
1 code implementation • 20 Jun 2023 • Pengfei Li, Jianyi Yang, Adam Wierman, Shaolei Ren
The results demonstrate that existing GLB approaches may amplify environmental inequity while our proposed equity-aware GLB can significantly reduce the regional disparity in terms of carbon and water footprints.
no code implementations • 16 Jun 2023 • Pengfei Li, Jianyi Yang, Adam Wierman, Shaolei Ren
This paper studies decentralized online convex optimization in a networked multi-agent system and proposes a novel algorithm, Learning-Augmented Decentralized Online optimization (LADO), for individual agents to select actions only based on local online information.
1 code implementation • 31 May 2023 • Pengfei Li, Jianyi Yang, Shaolei Ren
The key novelty of LOMAR is a new online switching operation which, based on a judicious condition to hedge against future uncertainties, decides whether to follow the expert's decision or the RL decision for each online item.
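The switching idea can be sketched as follows; the cumulative-cost condition and the threshold `rho` are illustrative assumptions, not the paper's exact hedging rule:

```python
def lomar_select(expert_cost_so_far, rl_cost_so_far,
                 rl_step_cost, expert_step_cost, rho=0.2):
    """Follow the RL decision only if doing so keeps the cumulative cost
    within a (1 + rho) factor of the expert's cumulative cost; otherwise
    fall back to the expert as a robustness safeguard."""
    if rl_cost_so_far + rl_step_cost <= (1 + rho) * (expert_cost_so_far + expert_step_cost):
        return "rl"
    return "expert"
```

Small `rho` hugs the expert's worst-case guarantee; large `rho` gives the RL policy more room to improve average cost.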
no code implementations • 1 May 2023 • Pengfei Li, Jianyi Yang, Shaolei Ren
In this paper, we propose a novel expert-robustified learning (ERL) approach, achieving both good average performance and robustness.
1 code implementation • 28 Apr 2023 • Tong Zhou, Yukui Luo, Shaolei Ren, Xiaolin Xu
In this work, we propose an active model IP protection scheme, namely NNSplitter, which actively protects the model by splitting it into two parts: the obfuscated model that performs poorly due to weight obfuscation, and the model secrets consisting of the indexes and original values of the obfuscated weights, which can only be accessed by authorized users with the support of the trusted execution environment.
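A minimal sketch of the split-and-restore idea, assuming a flat weight vector; the real scheme selects which weights to obfuscate strategically and keeps the secrets inside a trusted execution environment:

```python
import numpy as np

def split_model(weights, num_obfuscated=4, seed=0):
    """Split a weight vector into an obfuscated copy (poor accuracy) and
    the 'model secrets': indexes and original values of obfuscated weights."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(weights), size=num_obfuscated, replace=False)
    secrets = {int(i): float(weights[i]) for i in idx}
    obfuscated = weights.copy()
    obfuscated[idx] = rng.normal(scale=10.0, size=num_obfuscated)  # corrupt weights
    return obfuscated, secrets

def restore(obfuscated, secrets):
    """Authorized users recover the original model from the secrets."""
    restored = obfuscated.copy()
    for i, v in secrets.items():
        restored[i] = v
    return restored
```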
1 code implementation • 6 Apr 2023 • Pengfei Li, Jianyi Yang, Mohammad A. Islam, Shaolei Ren
To respond to the global water challenges, AI models can, and also must, take social responsibility and lead by example by addressing their own water footprint.
1 code implementation • 23 Feb 2023 • Yejia Liu, Shijin Duan, Xiaolin Xu, Shaolei Ren
Fast model updates for unseen tasks on intelligent edge devices are crucial but also challenging due to the limited computational power.
no code implementations • 3 Dec 2022 • Jianyi Yang, Shaolei Ren
Online optimization with multiple budget constraints is challenging since the online decisions over a short time horizon are coupled together by strict inventory constraints.
1 code implementation • 16 Oct 2022 • Yejia Liu, Wang Zhu, Shaolei Ren
To provide an approximate solution to this problem in the online continual learning setting, we further propose the Global Pseudo-task Simulation (GPS), which mimics future catastrophic forgetting of the current task by permutation.
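The permutation trick can be sketched as follows (a simplified stand-in for how GPS constructs pseudo-tasks):

```python
import numpy as np

def pseudo_task(x, seed):
    """Simulate a future task by applying a fixed random permutation to
    the input features; training on such pseudo-tasks mimics the
    forgetting that future tasks would inflict on the current one."""
    rng = np.random.default_rng(seed)
    perm = rng.permutation(x.shape[-1])
    return x[..., perm]
```

Each seed yields a distinct but reproducible pseudo-task over the same data distribution.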
1 code implementation • 17 Aug 2022 • Tong Zhou, Shaolei Ren, Xiaolin Xu
Nonetheless, we observe that, even with only an extracted obfuscated DNN architecture, the adversary can still retrain a substitute model with high performance (e.g., accuracy), rendering the obfuscation techniques ineffective.
no code implementations • 2 Jul 2022 • Jianyi Yang, Shaolei Ren
Based on the theoretical analysis, we propose a generalized informed training objective to better exploit the benefits of knowledge and balance the label and knowledge imperfectness, which is validated by the population risk bound.
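The balance between label and knowledge signals can be illustrated by a convex combination; the weight `lam` is a hypothetical knob, not the paper's exact objective:

```python
def informed_loss(label_loss, knowledge_loss, lam=0.5):
    """Blend the supervised label loss with a knowledge-based loss.
    lam -> 0 trusts labels; lam -> 1 trusts the domain knowledge."""
    return (1 - lam) * label_loss + lam * knowledge_loss
```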
no code implementations • 18 Apr 2022 • Pengfei Li, Jianyi Yang, Shaolei Ren
Nonetheless, by using the standard practice of training an ML model as a standalone optimizer and plugging it into an ML-augmented algorithm, the average cost performance can be highly unsatisfactory.
1 code implementation • 25 Mar 2022 • Bingqian Lu, Zheyu Yan, Yiyu Shi, Shaolei Ren
We first perform neural architecture search to obtain a small set of optimal architectures for one accelerator candidate.
1 code implementation • 18 Mar 2022 • Shijin Duan, Yejia Liu, Shaolei Ren, Xiaolin Xu
Thanks to its tiny storage footprint and efficient execution, Hyperdimensional Computing (HDC) is emerging as a lightweight learning framework on resource-constrained hardware.
1 code implementation • 9 Mar 2022 • Shijin Duan, Xiaolin Xu, Shaolei Ren
Nonetheless, they have two fundamental drawbacks: a heuristic training process and ultra-high dimensionality, which result in sub-optimal inference accuracy and model sizes beyond the capability of tiny devices with stringent resource constraints.
no code implementations • 20 Dec 2021 • Zhihui Shao, Jianyi Yang, Cong Shen, Shaolei Ren
Learning to optimize (L2O) has recently emerged as a promising approach to solving optimization problems by exploiting the strong prediction power of neural networks and offering lower runtime complexity than conventional solvers.
no code implementations • 11 Dec 2021 • Yang Bai, Lixing Chen, Shaolei Ren, Jie Xu
The core of our method is a DNN selection module that learns user QoE patterns on-the-fly and identifies the best-fit DNN for on-thing inference with the learned knowledge.
1 code implementation • 1 Nov 2021 • Bingqian Lu, Jianyi Yang, Weiwen Jiang, Yiyu Shi, Shaolei Ren
A key requirement of efficient hardware-aware NAS is the fast evaluation of inference latencies in order to rank different architectures.
no code implementations • 9 Feb 2021 • Jianyi Yang, Shaolei Ren
A standard assumption in contextual multi-arm bandit is that the true context is perfectly known before arm selection.
no code implementations • 3 Dec 2020 • Jing Dong, Tan Li, Shaolei Ren, Linqi Song
To further improve the performance of distributed Thompson Sampling, we propose a distributed Elimination-based Thompson Sampling algorithm that allows the agents to learn collaboratively.
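One collaborative elimination round can be sketched as follows; the confidence radius here is a generic Hoeffding-style bound, not the paper's exact construction:

```python
import numpy as np

def eliminate_arms(means, counts, active, delta=0.1):
    """Drop any active arm whose confidence upper bound falls below the
    best lower bound; agents would pool means/counts before this step."""
    radius = np.sqrt(np.log(1 / delta) / (2 * np.maximum(counts, 1)))
    lower, upper = means - radius, means + radius
    best_lower = max(lower[i] for i in active)
    return [i for i in active if upper[i] >= best_lower]
```

Sharing pooled statistics across agents shrinks the confidence radius faster, so clearly suboptimal arms are eliminated sooner than in the single-agent setting.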
no code implementations • 17 Nov 2020 • Jianyi Yang, Shaolei Ren
With the exploding popularity of machine learning, domain knowledge in various forms has been playing a crucial role in improving the learning performance, especially when training data is limited.
no code implementations • 1 Sep 2020 • Bingqian Lu, Jianyi Yang, Shaolei Ren
In the first approach, we reuse the performance predictors built on a proxy device, and leverage the performance monotonicity to scale up the DNN optimization without re-building performance predictors for each different device.
no code implementations • 3 Jul 2020 • Zhihui Shao, Jianyi Yang, Shaolei Ren
In this paper, we address trustworthiness of DNNs by using post-hoc processing to monitor the true inference accuracy on a user's dataset.
Ranked #28 on Image Classification on STL-10
no code implementations • 16 Jun 2020 • Zhihui Shao, Jianyi Yang, Shaolei Ren
In this paper, we propose a new post-hoc confidence calibration method, called CCAC (Confidence Calibration with an Auxiliary Class), for DNN classifiers on OOD datasets.
no code implementations • 10 Jun 2020 • Fangfang Yang, Shaolei Ren
Being an emerging class of in-memory computing architecture, brain-inspired hyperdimensional computing (HDC) mimics brain cognition and leverages random hypervectors (i.e., vectors with a dimensionality of thousands or even more) to represent features and to perform classification tasks.
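Inference in HDC can be sketched as nearest-prototype matching under Hamming distance; the encoding of raw features into hypervectors is omitted here for brevity:

```python
import numpy as np

def hdc_classify(x_bits, class_prototypes):
    """Return the index of the class whose prototype hypervector is
    closest (in Hamming distance) to the query hypervector."""
    dists = [np.count_nonzero(x_bits != p) for p in class_prototypes]
    return int(np.argmin(dists))
```

Because hypervectors are high-dimensional and random, prototypes of different classes are nearly orthogonal, which makes this simple distance test robust to bit errors.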
no code implementations • 29 Feb 2020 • Luting Yang, Bingqian Lu, Shaolei Ren
Running deep neural network (DNN) inference on mobile devices, i.e., mobile inference, has become a growing trend, making inference less dependent on network connections and keeping private data locally.
no code implementations • 7 Oct 2018 • Lixing Chen, Jie Xu, Shaolei Ren, Pan Zhou
To solve this problem and optimize the edge computing performance, we propose SEEN, a Spatial-temporal Edge sErvice placemeNt algorithm.
no code implementations • 17 Mar 2017 • Jie Xu, Lixing Chen, Shaolei Ren
Mobile edge computing (a.k.a.