Search Results for author: Zhongxiang Dai

Found 25 papers, 13 papers with code

Robustifying and Boosting Training-Free Neural Architecture Search

1 code implementation 12 Mar 2024 Zhenfeng He, Yao Shu, Zhongxiang Dai, Bryan Kian Hsiang Low

Nevertheless, the estimation ability of these metrics typically varies across different tasks, making it challenging to achieve robust and consistently good search performance on diverse tasks with only a single training-free metric.

Bayesian Optimization Neural Architecture Search

Localized Zeroth-Order Prompt Optimization

no code implementations 5 Mar 2024 Wenyang Hu, Yao Shu, Zongmin Yu, Zhaoxuan Wu, Xiaoqiang Lin, Zhongxiang Dai, See-Kiong Ng, Bryan Kian Hsiang Low

Existing methodologies usually prioritize global optimization to find the global optimum, which, however, performs poorly in certain tasks.

Use Your INSTINCT: INSTruction optimization usIng Neural bandits Coupled with Transformers

1 code implementation 2 Oct 2023 Xiaoqiang Lin, Zhaoxuan Wu, Zhongxiang Dai, Wenyang Hu, Yao Shu, See-Kiong Ng, Patrick Jaillet, Bryan Kian Hsiang Low

We perform instruction optimization for ChatGPT and use extensive experiments to show that our INSTINCT consistently outperforms the existing methods in different tasks, such as in various instruction induction tasks and the task of improving the zero-shot chain-of-thought instruction.

Bayesian Optimization Instruction Following

WASA: WAtermark-based Source Attribution for Large Language Model-Generated Data

no code implementations 1 Oct 2023 Jingtan Wang, Xinyang Lu, Zitong Zhao, Zhongxiang Dai, Chuan-Sheng Foo, See-Kiong Ng, Bryan Kian Hsiang Low

The impressive performances of large language models (LLMs) and their immense potential for commercialization have given rise to serious concerns over the intellectual property (IP) of their training data.

Language Modelling Large Language Model

Federated Zeroth-Order Optimization using Trajectory-Informed Surrogate Gradients

1 code implementation 8 Aug 2023 Yao Shu, Xiaoqiang Lin, Zhongxiang Dai, Bryan Kian Hsiang Low

To this end, we (a) introduce trajectory-informed gradient surrogates, which are able to use the history of function queries during optimization for accurate and query-efficient gradient estimation, and (b) develop the technique of adaptive gradient correction using these gradient surrogates to mitigate the aforementioned disparity.

Adversarial Attack Federated Learning
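
The trajectory-informed idea above can be illustrated with a GP surrogate: fit a GP to all queries made so far and differentiate its posterior mean to obtain a gradient estimate, instead of spending fresh queries on finite differences at every step. A minimal sketch under assumed RBF-kernel settings; the function names, hyperparameters, and toy loop are illustrative, not the paper's code, and the adaptive gradient correction is not shown:

```python
import numpy as np

def rbf(A, B, ls=1.0):
    # Squared-exponential kernel matrix between row-stacked points A and B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls ** 2)

def surrogate_gradient(x, X_hist, y_hist, ls=1.0, noise=1e-6):
    # Fit a GP to the whole query trajectory and differentiate its
    # posterior mean m(x) = k(x, X) K^{-1} y at the current iterate x.
    K = rbf(X_hist, X_hist, ls) + noise * np.eye(len(X_hist))
    alpha = np.linalg.solve(K, y_hist)                  # K^{-1} y, shape (n,)
    k = rbf(x[None, :], X_hist, ls)[0]                  # k(x, x_i), shape (n,)
    dk = -(x[None, :] - X_hist) / ls ** 2 * k[:, None]  # grad_x k(x, x_i), (n, d)
    return dk.T @ alpha                                 # gradient of m(x), (d,)

# Zeroth-order descent that reuses the full query history:
rng = np.random.default_rng(0)
f = lambda x: float((x ** 2).sum())                     # black-box objective
x, X_hist, y_hist = rng.normal(size=2), [], []
for _ in range(50):
    X_hist.append(x.copy()); y_hist.append(f(x))
    if len(X_hist) > 2:
        g = surrogate_gradient(x, np.array(X_hist), np.array(y_hist))
        x = x - 0.2 * g
    else:
        x = x + 0.1 * rng.normal(size=2)                # bootstrap queries
```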

Training-Free Neural Active Learning with Initialization-Robustness Guarantees

1 code implementation 7 Jun 2023 Apivich Hemachandra, Zhongxiang Dai, Jasraj Singh, See-Kiong Ng, Bryan Kian Hsiang Low

To this end, we introduce our expected variance with Gaussian processes (EV-GP) criterion for neural active learning, which is theoretically guaranteed to select data points that lead to trained NNs with both (a) good predictive performance and (b) initialization robustness.

Active Learning Gaussian Processes
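
A variance-based proxy for this criterion can be sketched with an ordinary GP: because GP posterior variances depend only on the inputs, candidate points can be scored before any labels exist or any network is trained, which is what makes such selection training-free. Below, an RBF kernel stands in for the paper's NTK-based kernel, and the greedy loop and names are illustrative assumptions:

```python
import numpy as np

def rbf(A, B, ls=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls ** 2)

def posterior_var(X_sel, X_ref, ls=1.0, noise=1e-3):
    # GP posterior variance on X_ref given (unlabeled!) selected inputs
    # X_sel; no y values are needed anywhere in the selection.
    K = rbf(X_sel, X_sel, ls) + noise * np.eye(len(X_sel))
    Ks = rbf(X_ref, X_sel, ls)
    return 1.0 - np.einsum('ij,ij->i', Ks @ np.linalg.inv(K), Ks)

def greedy_ev_select(pool, X_ref, budget, ls=1.0):
    # Greedily add the pool point whose inclusion most reduces average
    # posterior variance over a reference set, so chosen batches are
    # informative and non-redundant.
    chosen = []
    for _ in range(budget):
        rest = [i for i in range(len(pool)) if i not in chosen]
        best = min(rest, key=lambda i:
                   posterior_var(pool[chosen + [i]], X_ref, ls).mean())
        chosen.append(best)
    return chosen

rng = np.random.default_rng(0)
pool, ref = rng.normal(size=(50, 2)), rng.normal(size=(200, 2))
print(greedy_ev_select(pool, ref, budget=5))
```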

FedHQL: Federated Heterogeneous Q-Learning

no code implementations 26 Jan 2023 Flint Xiaofeng Fan, Yining Ma, Zhongxiang Dai, Cheston Tan, Bryan Kian Hsiang Low, Roger Wattenhofer

Federated Reinforcement Learning (FedRL) encourages distributed agents to learn collectively from each other's experience to improve their performance without exchanging their raw trajectories.

Q-Learning reinforcement-learning +1

Sample-Then-Optimize Batch Neural Thompson Sampling

1 code implementation 13 Oct 2022 Zhongxiang Dai, Yao Shu, Bryan Kian Hsiang Low, Patrick Jaillet

…linear model), which is equivalently sampled from the GP posterior with the NTK as the kernel function.

AutoML Bayesian Optimization +1
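
The "equivalently sampled" observation is easiest to see in the linear special case: perturbing the targets and the prior draw, then solving a ridge problem, yields an exact sample from the Bayesian linear-regression posterior, so Thompson sampling never has to factorize a posterior covariance. A small sketch of that sample-then-optimize step; the variable names and toy setup are ours, not the paper's:

```python
import numpy as np

def sto_posterior_sample(Phi, y, noise=0.1, rng=None):
    # Sample-then-optimize: minimize ||Phi w - (y + eps)||^2 / noise^2
    # + ||w - w0||^2 with w0 ~ N(0, I) and eps ~ N(0, noise^2 I).
    # For a unit Gaussian prior, the minimizer is an exact draw from the
    # Bayesian linear-regression posterior.
    rng = rng or np.random.default_rng()
    n, d = Phi.shape
    w0 = rng.standard_normal(d)
    eps = noise * rng.standard_normal(n)
    A = Phi.T @ Phi / noise ** 2 + np.eye(d)
    b = Phi.T @ (y + eps) / noise ** 2 + w0
    return np.linalg.solve(A, b)

# One Thompson-sampling round over a finite candidate set; with NN
# features in place of Phi this mimics sampling from an NTK-GP posterior.
rng = np.random.default_rng(1)
cands = rng.normal(size=(100, 5))                   # candidate feature vectors
w_true = rng.normal(size=5)
Phi = rng.normal(size=(30, 5))                      # features of past queries
y = Phi @ w_true + 0.1 * rng.standard_normal(30)    # past noisy rewards
w_sample = sto_posterior_sample(Phi, y, rng=rng)
next_query = cands[np.argmax(cands @ w_sample)]     # maximize the sampled model
```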

Bayesian Optimization under Stochastic Delayed Feedback

1 code implementation 19 Jun 2022 Arun Verma, Zhongxiang Dai, Bryan Kian Hsiang Low

The existing BO methods assume that the function evaluation (feedback) is available to the learner immediately or after a fixed delay.

Bayesian Optimization

On Provably Robust Meta-Bayesian Optimization

1 code implementation 14 Jun 2022 Zhongxiang Dai, Yizhou Chen, Haibin Yu, Bryan Kian Hsiang Low, Patrick Jaillet

We prove that both algorithms are asymptotically no-regret even when some or all previous tasks are dissimilar to the current task, and show that RM-GP-UCB enjoys a better theoretical robustness than RM-GP-TS.

Bayesian Optimization Meta-Learning +1

Federated Neural Bandits

1 code implementation 28 May 2022 Zhongxiang Dai, Yao Shu, Arun Verma, Flint Xiaofeng Fan, Bryan Kian Hsiang Low, Patrick Jaillet

To better exploit the federated setting, FN-UCB adopts a weighted combination of two UCBs: $\text{UCB}^{a}$ allows every agent to additionally use the observations from the other agents to accelerate exploration (without sharing raw observations), while $\text{UCB}^{b}$ uses an NN with aggregated parameters for reward prediction in a similar way to federated averaging for supervised learning.

Multi-Armed Bandits
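
A stripped-down rendering of that weighted combination, with linear rewards in place of the paper's NNs: $\text{UCB}^{a}$ is built from sufficient statistics pooled across agents (so raw observations never leave an agent), while $\text{UCB}^{b}$ scores arms with averaged parameters. The fixed weight and linear model are our simplifying assumptions; FN-UCB itself uses NTK-based confidence sets and a time-varying weight:

```python
import numpy as np

def fn_ucb_score(x, V_pool, b_pool, theta_avg, beta=1.0, w=0.5):
    # UCB^a: estimate from sufficient statistics (V, b) pooled over all
    # agents -- agents share statistics, never raw (arm, reward) pairs.
    theta_a = np.linalg.solve(V_pool, b_pool)
    bonus = beta * np.sqrt(x @ np.linalg.solve(V_pool, x))
    ucb_a = x @ theta_a + bonus
    # UCB^b: predict with parameters averaged across agents, in the
    # spirit of federated averaging, plus the same exploration bonus.
    ucb_b = x @ theta_avg + bonus
    return w * ucb_a + (1.0 - w) * ucb_b

# Each agent holds (V_i, b_i) = (sum x x^T, sum r x); the server pools them.
rng = np.random.default_rng(0)
d = 4
V_pool, b_pool = np.eye(d), np.zeros(d)
for _ in range(3):                                   # three simulated agents
    X = rng.normal(size=(20, d))
    r = X @ np.ones(d) + 0.1 * rng.standard_normal(20)
    V_pool += X.T @ X
    b_pool += X.T @ r
theta_avg = np.linalg.solve(V_pool, b_pool)          # stand-in for FedAvg params
arms = rng.normal(size=(10, d))
best = max(arms, key=lambda a: fn_ucb_score(a, V_pool, b_pool, theta_avg))
```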

Adjusted Expected Improvement for Cumulative Regret Minimization in Noisy Bayesian Optimization

no code implementations 10 May 2022 Shouri Hu, Haowei Wang, Zhongxiang Dai, Bryan Kian Hsiang Low, Szu Hui Ng

To adapt EI for better performance under cumulative regret, we introduce a novel quantity called the evaluation cost, which is compared against the acquisition function, and with this develop the expected improvement-cost (EIC) algorithm.

Bayesian Optimization
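
The comparison described above amounts to a gate on each query: compute the candidate's expected improvement and evaluate it only if the EI exceeds the evaluation cost; otherwise replay the incumbent so that low-value queries do not inflate cumulative regret. A sketch of that decision rule, with the evaluation cost passed in as an opaque number since the paper's precise definition is not reproduced here:

```python
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, best_y):
    # Standard EI for maximization at a point with GP posterior N(mu, sigma^2).
    z = (mu - best_y) / sigma
    return (mu - best_y) * norm.cdf(z) + sigma * norm.pdf(z)

def eic_step(mu, sigma, best_y, eval_cost):
    # Gate the query: spend an evaluation only when its expected improvement
    # outweighs the evaluation cost; otherwise replay the incumbent, which
    # is the regret-safe default.
    ei = expected_improvement(mu, sigma, best_y)
    return "query_candidate" if ei > eval_cost else "replay_incumbent"

print(eic_step(mu=1.2, sigma=0.3, best_y=1.0, eval_cost=0.05))
```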

Unifying and Boosting Gradient-Based Training-Free Neural Architecture Search

1 code implementation 24 Jan 2022 Yao Shu, Zhongxiang Dai, Zhaoxuan Wu, Bryan Kian Hsiang Low

As a consequence, (a) the relationships among these metrics are unclear, (b) there is no theoretical interpretation for their empirical performances, and (c) there may exist untapped potential in existing training-free NAS, which probably can be unveiled through a unified theoretical understanding.

Neural Architecture Search

Optimizing Conditional Value-At-Risk of Black-Box Functions

1 code implementation NeurIPS 2021 Quoc Phong Nguyen, Zhongxiang Dai, Bryan Kian Hsiang Low, Patrick Jaillet

This paper presents two Bayesian optimization (BO) algorithms with theoretical performance guarantees for maximizing the conditional value-at-risk (CVaR) of a black-box function: CV-UCB and CV-TS, which are based on the well-established principle of optimism in the face of uncertainty and on Thompson sampling, respectively.

Bayesian Optimization Thompson Sampling
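
Concretely, for a maximization problem with a random environmental variable, CVaR at level alpha is the average of the worst alpha-fraction of outcomes, so optimizing it hedges against tail risk rather than the mean. A minimal empirical estimator on a toy setup of our own; CV-UCB then maximizes an optimistic upper bound of this quantity under the GP posterior, and CV-TS a posterior sample of it:

```python
import numpy as np

def empirical_cvar(outcomes, alpha=0.1):
    # Mean of the worst alpha-fraction of outcomes (lower tail,
    # for a quantity we want to be large).
    v = np.sort(np.asarray(outcomes))
    k = max(1, int(np.ceil(alpha * len(v))))
    return v[:k].mean()

# f(x, w): black-box value of decision x under random environment w.
rng = np.random.default_rng(0)
f = lambda x, w: -(x - 1.0) ** 2 + w
W = rng.standard_normal(10_000)                 # environment samples
print(empirical_cvar(f(0.9, W), alpha=0.1))     # tail-averse score of x = 0.9
```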

Differentially Private Federated Bayesian Optimization with Distributed Exploration

no code implementations NeurIPS 2021 Zhongxiang Dai, Bryan Kian Hsiang Low, Patrick Jaillet

The resulting differentially private FTS with DE (DP-FTS-DE) algorithm is endowed with theoretical guarantees for both privacy and utility, and is amenable to interesting theoretical insights about the privacy-utility trade-off.

Bayesian Optimization Federated Learning +1

Fault-Tolerant Federated Reinforcement Learning with Theoretical Guarantee

2 code implementations NeurIPS 2021 Flint Xiaofeng Fan, Yining Ma, Zhongxiang Dai, Wei Jing, Cheston Tan, Bryan Kian Hsiang Low

The growing literature of Federated Learning (FL) has recently inspired Federated Reinforcement Learning (FRL) to encourage multiple agents to federatively build a better decision-making policy without sharing raw trajectories.

Decision Making Federated Learning +2

Neural Ensemble Search via Bayesian Sampling

no code implementations 6 Sep 2021 Yao Shu, Yizhou Chen, Zhongxiang Dai, Bryan Kian Hsiang Low

Unfortunately, these NAS algorithms aim to select only a single well-performing architecture from their search spaces, and have thus overlooked the capability of a neural network ensemble (i.e., an ensemble of neural networks with diverse architectures) to achieve improved performance over a single final selected architecture.

Adversarial Defense Neural Architecture Search

Value-at-Risk Optimization with Gaussian Processes

no code implementations 13 May 2021 Quoc Phong Nguyen, Zhongxiang Dai, Bryan Kian Hsiang Low, Patrick Jaillet

Value-at-risk (VaR) is an established measure to assess risks in critical real-world applications with random environmental factors.

Gaussian Processes Portfolio Optimization
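
For reference alongside the CVaR sketch earlier: VaR at level alpha is simply the alpha-quantile of the outcome distribution, the threshold below which the worst alpha-fraction of outcomes falls, whereas CVaR averages that tail. A one-liner on a toy setup of our own, not the paper's:

```python
import numpy as np

def empirical_var(outcomes, alpha=0.1):
    # alpha-quantile of outcomes: the worst alpha-fraction lies below it.
    return np.quantile(np.asarray(outcomes), alpha)

rng = np.random.default_rng(0)
outcomes = 1.0 + rng.standard_normal(10_000)
print(empirical_var(outcomes, alpha=0.1))   # roughly 1.0 - 1.28 for N(1, 1)
```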

Private Outsourced Bayesian Optimization

no code implementations ICML 2020 Dmitrii Kharkovskii, Zhongxiang Dai, Bryan Kian Hsiang Low

This paper presents the private-outsourced-Gaussian process-upper confidence bound (PO-GP-UCB) algorithm, which is the first algorithm for privacy-preserving Bayesian optimization (BO) in the outsourced setting with a provable performance guarantee.

Bayesian Optimization Privacy Preserving

R2-B2: Recursive Reasoning-Based Bayesian Optimization for No-Regret Learning in Games

no code implementations ICML 2020 Zhongxiang Dai, Yizhou Chen, Kian Hsiang Low, Patrick Jaillet, Teck-Hua Ho

This paper presents a recursive reasoning formalism of Bayesian optimization (BO) to model the reasoning process in the interactions between boundedly rational, self-interested agents with unknown, complex, and costly-to-evaluate payoff functions in repeated games, which we call Recursive Reasoning-Based BO (R2-B2).

Bayesian Optimization Multi-agent Reinforcement Learning

Implicit Posterior Variational Inference for Deep Gaussian Processes

1 code implementation NeurIPS 2019 Haibin Yu, Yizhou Chen, Zhongxiang Dai, Kian Hsiang Low, Patrick Jaillet

This paper presents an implicit posterior variational inference (IPVI) framework for DGPs that can ideally recover an unbiased posterior belief and still preserve time efficiency.

Gaussian Processes Variational Inference

Bayesian Optimization with Binary Auxiliary Information

no code implementations 17 Jun 2019 Yehong Zhang, Zhongxiang Dai, Kian Hsiang Low

This paper presents novel mixed-type Bayesian optimization (BO) algorithms to accelerate the optimization of a target objective function by exploiting correlated auxiliary information of binary type that can be more cheaply obtained, such as in policy search for reinforcement learning and hyperparameter tuning of machine learning models with early stopping.

Bayesian Optimization Vocal Bursts Type Prediction
