Search Results for author: Bryan Kian Hsiang Low

Found 50 papers, 21 papers with code

Outsourced Bayesian Optimization

no code implementations · ICML 2020 · Dmitrii Kharkovskii, Zhongxiang Dai, Bryan Kian Hsiang Low

This paper presents the outsourced-Gaussian process-upper confidence bound (O-GP-UCB) algorithm, which is the first algorithm for privacy-preserving Bayesian optimization (BO) in the outsourced setting with a provable performance guarantee.

Bayesian Optimization Privacy Preserving
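
For context, O-GP-UCB belongs to the GP-UCB family, which picks each query by maximizing an upper confidence bound built from the Gaussian process posterior. A minimal sketch of the standard (non-outsourced) acquisition rule, with exploration parameter $\beta_t$; the paper's contribution is making this kind of procedure privacy-preserving in the outsourced setting:

```latex
% Standard GP-UCB acquisition rule (background, not the outsourced variant):
% query the maximizer of posterior mean plus scaled posterior standard deviation.
x_t = \arg\max_{x \in \mathcal{X}} \; \mu_{t-1}(x) + \sqrt{\beta_t}\, \sigma_{t-1}(x)
```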

Learning Task-Agnostic Embedding of Multiple Black-Box Experts for Multi-Task Model Fusion

no code implementations · ICML 2020 · Nghia Hoang, Thanh Lam, Bryan Kian Hsiang Low, Patrick Jaillet

The task-agnostic prototypes can then be reintegrated to generate a new model that solves a new task encoded with a different prototype distribution.

Knowledge Distillation

PINNACLE: PINN Adaptive ColLocation and Experimental points selection

3 code implementations · 11 Apr 2024 · Gregory Kang Ruey Lau, Apivich Hemachandra, See-Kiong Ng, Bryan Kian Hsiang Low

Physics-Informed Neural Networks (PINNs), which incorporate PDEs as soft constraints, train with a composite loss function that contains multiple training point types: different types of collocation points chosen during training to enforce each PDE and initial/boundary conditions, and experimental points, which are usually costly to obtain via experiments or simulations.

Transfer Learning
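
The composite PINN loss referred to above is, in generic form, a weighted sum over the different point types. A hedged sketch (the weights $\lambda$ and point sets are illustrative; PINNACLE's contribution is selecting these points adaptively):

```latex
% Generic PINN composite loss: PDE residual at collocation points x_r,
% initial/boundary conditions at x_b, and experimental data (x_e, y).
\mathcal{L}(\theta) =
  \lambda_r \frac{1}{N_r} \sum_{i=1}^{N_r} \big\| \mathcal{N}[u_\theta](x_r^i) \big\|^2
+ \lambda_b \frac{1}{N_b} \sum_{j=1}^{N_b} \big\| \mathcal{B}[u_\theta](x_b^j) \big\|^2
+ \lambda_e \frac{1}{N_e} \sum_{k=1}^{N_e} \big( u_\theta(x_e^k) - y^k \big)^2
```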

Robustifying and Boosting Training-Free Neural Architecture Search

1 code implementation · 12 Mar 2024 · Zhenfeng He, Yao Shu, Zhongxiang Dai, Bryan Kian Hsiang Low

Nevertheless, the estimation ability of these metrics typically varies across different tasks, making it challenging to achieve robust and consistently good search performance on diverse tasks with only a single training-free metric.

Bayesian Optimization · Neural Architecture Search

Localized Zeroth-Order Prompt Optimization

no code implementations · 5 Mar 2024 · Wenyang Hu, Yao Shu, Zongmin Yu, Zhaoxuan Wu, Xiangqiang Lin, Zhongxiang Dai, See-Kiong Ng, Bryan Kian Hsiang Low

Existing methodologies usually prioritize global optimization to find the global optimum, an approach that nonetheless performs poorly on certain tasks.

Decentralized Sum-of-Nonconvex Optimization

no code implementations · 4 Feb 2024 · Zhuanghua Liu, Bryan Kian Hsiang Low

However, the convergence rate of the PMGT-SVRG algorithm has a linear dependency on the condition number, which is undesirable for ill-conditioned problems.

Incremental Quasi-Newton Methods with Faster Superlinear Convergence Rates

no code implementations · 4 Feb 2024 · Zhuanghua Liu, Luo Luo, Bryan Kian Hsiang Low

The recently proposed incremental quasi-Newton method is based on the BFGS update and achieves a local superlinear convergence rate that depends on the condition number of the problem.
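
The BFGS update mentioned above is the classical secant-based Hessian approximation: with step $s_k = x_{k+1} - x_k$ and gradient change $y_k = \nabla f(x_{k+1}) - \nabla f(x_k)$,

```latex
% Classical BFGS Hessian-approximation update (background for the abstract).
B_{k+1} = B_k
  - \frac{B_k s_k s_k^{\top} B_k}{s_k^{\top} B_k s_k}
  + \frac{y_k y_k^{\top}}{y_k^{\top} s_k}
```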

Understanding Domain Generalization: A Noise Robustness Perspective

1 code implementation · 26 Jan 2024 · Rui Qiao, Bryan Kian Hsiang Low

Despite the rapid development of machine learning algorithms for domain generalization (DG), there is no clear empirical evidence that the existing DG algorithms outperform the classic empirical risk minimization (ERM) across standard benchmarks.

Domain Generalization

DeRDaVa: Deletion-Robust Data Valuation for Machine Learning

1 code implementation · 18 Dec 2023 · Xiao Tian, Rachael Hwee Ling Sim, Jue Fan, Bryan Kian Hsiang Low

Data valuation is concerned with determining a fair valuation of data from data sources to compensate them or to identify training examples that are the most or least useful for predictions.

Data Valuation
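
As a concrete illustration of the valuation problem (a common Shapley-value baseline, not DeRDaVa itself), each source can be scored by its average marginal contribution over random orderings. A minimal sketch, where `utility` is any assumed function mapping a list of sources to model performance:

```python
import random

def shapley_values(sources, utility, num_permutations=200):
    """Monte Carlo Shapley estimate: average each source's marginal
    contribution to `utility` over random orderings of the sources."""
    values = {s: 0.0 for s in sources}
    for _ in range(num_permutations):
        order = random.sample(sources, len(sources))  # a random permutation
        coalition, prev = [], utility([])
        for s in order:
            coalition.append(s)
            curr = utility(coalition)
            values[s] += (curr - prev) / num_permutations
            prev = curr
    return values
```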

Use Your INSTINCT: INSTruction optimization usIng Neural bandits Coupled with Transformers

1 code implementation · 2 Oct 2023 · Xiaoqiang Lin, Zhaoxuan Wu, Zhongxiang Dai, Wenyang Hu, Yao Shu, See-Kiong Ng, Patrick Jaillet, Bryan Kian Hsiang Low

We perform instruction optimization for ChatGPT and use extensive experiments to show that our INSTINCT consistently outperforms the existing methods across different tasks, such as various instruction induction tasks and the task of improving the zero-shot chain-of-thought instruction.

Bayesian Optimization · Instruction Following

WASA: WAtermark-based Source Attribution for Large Language Model-Generated Data

no code implementations · 1 Oct 2023 · Jingtan Wang, Xinyang Lu, Zitong Zhao, Zhongxiang Dai, Chuan-Sheng Foo, See-Kiong Ng, Bryan Kian Hsiang Low

The impressive performances of large language models (LLMs) and their immense potential for commercialization have given rise to serious concerns over the intellectual property (IP) of their training data.

Language Modelling · Large Language Model

Federated Zeroth-Order Optimization using Trajectory-Informed Surrogate Gradients

1 code implementation · 8 Aug 2023 · Yao Shu, Xiaoqiang Lin, Zhongxiang Dai, Bryan Kian Hsiang Low

To this end, we (a) introduce trajectory-informed gradient surrogates, which are able to use the history of function queries during optimization for accurate and query-efficient gradient estimation, and (b) develop the technique of adaptive gradient correction using these gradient surrogates to mitigate the aforementioned disparity.

Adversarial Attack · Federated Learning
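
For background, the disparity mentioned above arises because zeroth-order methods must estimate gradients from function queries alone. The standard two-point estimator below is the usual starting point (the paper's trajectory-informed surrogates instead reuse the whole query history rather than fresh perturbations):

```python
import numpy as np

def two_point_zo_gradient(f, x, mu=1e-3, num_samples=20):
    """Standard two-point zeroth-order gradient estimate of f at x:
    average directional finite differences along random Gaussian directions."""
    grad = np.zeros_like(x)
    for _ in range(num_samples):
        u = np.random.randn(*x.shape)
        grad += (f(x + mu * u) - f(x - mu * u)) / (2.0 * mu) * u
    return grad / num_samples
```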

Hessian-Aware Bayesian Optimization for Decision Making Systems

no code implementations · 1 Aug 2023 · Mohit Rajpal, Lac Gia Tran, Yehong Zhang, Bryan Kian Hsiang Low

Derivative-free approaches such as Bayesian Optimization mitigate the dependency on the quality of gradient feedback, but are known to scale poorly in the high-dimensional setting of complex decision making systems.

Bayesian Optimization · Decision Making

Fair yet Asymptotically Equal Collaborative Learning

1 code implementation · 9 Jun 2023 · Xiaoqiang Lin, Xinyi Xu, See-Kiong Ng, Chuan-Sheng Foo, Bryan Kian Hsiang Low

In collaborative learning with streaming data, nodes (e.g., organizations) jointly and continuously learn a machine learning (ML) model by sharing the latest model updates computed from their latest streaming data.

Fairness · Incremental Learning

Training-Free Neural Active Learning with Initialization-Robustness Guarantees

1 code implementation · 7 Jun 2023 · Apivich Hemachandra, Zhongxiang Dai, Jasraj Singh, See-Kiong Ng, Bryan Kian Hsiang Low

To this end, we introduce our expected variance with Gaussian processes (EV-GP) criterion for neural active learning, which is theoretically guaranteed to select data points that lead to trained NNs with both (a) good predictive performance and (b) initialization robustness.

Active Learning · Gaussian Processes
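
A hedged sketch of the general flavor of variance-based selection with a Gaussian process surrogate (greedy highest posterior variance; the actual EV-GP criterion is defined via the NTK over NN initializations, which this does not implement):

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def select_by_gp_variance(X_pool, X_labeled, y_labeled, k=5):
    """Pick the k pool points with the highest GP posterior predictive
    standard deviation given the currently labeled data."""
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0))
    gp.fit(X_labeled, y_labeled)
    _, std = gp.predict(X_pool, return_std=True)
    return np.argsort(-std)[:k]  # indices of the k most uncertain points
```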

Goat: Fine-tuned LLaMA Outperforms GPT-4 on Arithmetic Tasks

1 code implementation · 23 May 2023 · Tiedong Liu, Bryan Kian Hsiang Low

We introduce Goat, a fine-tuned LLaMA model that significantly outperforms GPT-4 on a range of arithmetic tasks.

Attribute

Fine-tuning Language Models with Generative Adversarial Reward Modelling

no code implementations · 9 May 2023 · Zhang Ze Yu, Lau Jia Jaw, Zhang Hui, Bryan Kian Hsiang Low

Reinforcement Learning with Human Feedback (RLHF) has been demonstrated to significantly enhance the performance of large language models (LLMs) by aligning their outputs with desired human values through instruction tuning.

reinforcement-learning

FedHQL: Federated Heterogeneous Q-Learning

no code implementations · 26 Jan 2023 · Flint Xiaofeng Fan, Yining Ma, Zhongxiang Dai, Cheston Tan, Bryan Kian Hsiang Low, Roger Wattenhofer

Federated Reinforcement Learning (FedRL) encourages distributed agents to learn collectively from each other's experience to improve their performance without exchanging their raw trajectories.

Q-Learning · reinforcement-learning +1

Sample-Then-Optimize Batch Neural Thompson Sampling

1 code implementation · 13 Oct 2022 · Zhongxiang Dai, Yao Shu, Bryan Kian Hsiang Low, Patrick Jaillet

…linear model), which is equivalently sampled from the GP posterior with the NTK as the kernel function.

AutoML · Bayesian Optimization +1

Bayesian Optimization under Stochastic Delayed Feedback

1 code implementation · 19 Jun 2022 · Arun Verma, Zhongxiang Dai, Bryan Kian Hsiang Low

The existing BO methods assume that the function evaluation (feedback) is available to the learner immediately or after a fixed delay.

Bayesian Optimization

On Provably Robust Meta-Bayesian Optimization

1 code implementation · 14 Jun 2022 · Zhongxiang Dai, Yizhou Chen, Haibin Yu, Bryan Kian Hsiang Low, Patrick Jaillet

We prove that both algorithms are asymptotically no-regret even when some or all previous tasks are dissimilar to the current task, and show that RM-GP-UCB enjoys a better theoretical robustness than RM-GP-TS.

Bayesian Optimization · Meta-Learning +1

Federated Neural Bandits

1 code implementation · 28 May 2022 · Zhongxiang Dai, Yao Shu, Arun Verma, Flint Xiaofeng Fan, Bryan Kian Hsiang Low, Patrick Jaillet

To better exploit the federated setting, FN-UCB adopts a weighted combination of two UCBs: $\text{UCB}^{a}$ allows every agent to additionally use the observations from the other agents to accelerate exploration (without sharing raw observations), while $\text{UCB}^{b}$ uses an NN with aggregated parameters for reward prediction in a similar way to federated averaging for supervised learning.

Multi-Armed Bandits
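
Schematically, the combination has the shape below; the exact weights $w_t$ and their schedule are specified in the paper, so this is only an assumed illustrative form:

```latex
% Illustrative shape of a weighted combination of the two UCBs (assumed form,
% not the paper's exact definition).
\alpha_t(x) = w_t \, \mathrm{UCB}^{a}_t(x) + (1 - w_t) \, \mathrm{UCB}^{b}_t(x)
```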

On the Convergence of the Shapley Value in Parametric Bayesian Learning Games

1 code implementation · 16 May 2022 · Lucas Agussurja, Xinyi Xu, Bryan Kian Hsiang Low

We show that for any two players, under some regularity conditions, their difference in Shapley value converges in probability to the difference in Shapley value of a limiting game whose characteristic function is proportional to the log-determinant of the joint Fisher information.

Bayesian Inference
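
Rendered compactly (a slightly informal transcription of the statement above), with $\phi_i$ player $i$'s Shapley value, $\phi_i^{\infty}$ its counterpart in the limiting game, and $I(\theta; C)$ the joint Fisher information of coalition $C$:

```latex
\phi_i - \phi_j \;\xrightarrow{\;p\;}\; \phi_i^{\infty} - \phi_j^{\infty},
\qquad \text{where} \quad
v_{\infty}(C) \;\propto\; \log \det I(\theta; C)
```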

Adjusted Expected Improvement for Cumulative Regret Minimization in Noisy Bayesian Optimization

no code implementations · 10 May 2022 · Shouri Hu, Haowei Wang, Zhongxiang Dai, Bryan Kian Hsiang Low, Szu Hui Ng

To adapt the EI for better performance under cumulative regret, we introduce a novel quantity called the evaluation cost which is compared against the acquisition function, and with this, develop the expected improvement-cost (EIC) algorithm.

Bayesian Optimization
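
For reference, the standard EI acquisition that EIC adjusts has the closed form below under a GP posterior with mean $\mu_t$, standard deviation $\sigma_t$, and incumbent best value $f^{*}_t$ (the evaluation-cost comparison itself is the paper's addition and is not shown):

```latex
% Closed-form expected improvement for maximization; \varphi and \Phi are the
% standard normal pdf and cdf.
\mathrm{EI}_t(x) = \big( \mu_t(x) - f^{*}_t \big) \, \Phi(z)
                 + \sigma_t(x) \, \varphi(z),
\qquad z = \frac{\mu_t(x) - f^{*}_t}{\sigma_t(x)}
```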

Markov Chain Monte Carlo-Based Machine Unlearning: Unlearning What Needs to be Forgotten

no code implementations · 28 Feb 2022 · Quoc Phong Nguyen, Ryutaro Oikawa, Dinil Mon Divakaran, Mun Choon Chan, Bryan Kian Hsiang Low

Similarly, MCU can be used to erase the lineage of a user's personal data from trained ML models, thus upholding a user's "right to be forgotten".

Machine Unlearning

Rectified Max-Value Entropy Search for Bayesian Optimization

no code implementations · 28 Feb 2022 · Quoc Phong Nguyen, Bryan Kian Hsiang Low, Patrick Jaillet

Although the existing max-value entropy search (MES) is based on the widely celebrated notion of mutual information, its empirical performance can suffer due to two misconceptions whose implications on the exploration-exploitation trade-off are investigated in this paper.

Bayesian Optimization · Misconceptions
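
The mutual-information notion referred to above is, in its standard form, the information that observing $y_x$ at input $x$ carries about the global maximum value $f^{*}$:

```latex
% Standard max-value entropy search acquisition (background form).
\alpha_{\mathrm{MES}}(x)
  = I\big( f^{*}; y_x \mid \mathcal{D}_t \big)
  = H\big( p(y_x \mid \mathcal{D}_t) \big)
  - \mathbb{E}_{f^{*}} \Big[ H\big( p(y_x \mid \mathcal{D}_t, f^{*}) \big) \Big]
```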

Unifying and Boosting Gradient-Based Training-Free Neural Architecture Search

1 code implementation · 24 Jan 2022 · Yao Shu, Zhongxiang Dai, Zhaoxuan Wu, Bryan Kian Hsiang Low

As a consequence, (a) the relationships among these metrics are unclear, (b) there is no theoretical interpretation for their empirical performances, and (c) there may exist untapped potential in existing training-free NAS, which probably can be unveiled through a unified theoretical understanding.

Neural Architecture Search

Incentivizing Collaboration in Machine Learning via Synthetic Data Rewards

1 code implementation · 17 Dec 2021 · Sebastian Shenghong Tay, Xinyi Xu, Chuan Sheng Foo, Bryan Kian Hsiang Low

This paper presents a novel collaborative generative modeling (CGM) framework that incentivizes collaboration among self-interested parties to contribute data to a pool for training a generative model (e.g., a GAN), from which synthetic data are drawn and distributed to the parties as rewards commensurate with their contributions.

BIG-bench Machine Learning · Data Valuation +1

Optimizing Conditional Value-At-Risk of Black-Box Functions

1 code implementation · NeurIPS 2021 · Quoc Phong Nguyen, Zhongxiang Dai, Bryan Kian Hsiang Low, Patrick Jaillet

This paper presents two Bayesian optimization (BO) algorithms with theoretical performance guarantees for maximizing the conditional value-at-risk (CVaR) of a black-box function: CV-UCB and CV-TS, which are based on the well-established principle of optimism in the face of uncertainty and on Thompson sampling, respectively.

Bayesian Optimization · Thompson Sampling
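
Under one common convention for a reward-type objective $f(x, Z)$ with environment randomness $Z$ and risk level $\alpha \in (0, 1)$, CVaR is the expectation over the worst $\alpha$-tail (the paper maximizes this quantity over $x$):

```latex
\mathrm{VaR}_{\alpha}\big(f(x,Z)\big)
  = \inf \{\, \omega : P\big(f(x,Z) \le \omega\big) \ge \alpha \,\},
\qquad
\mathrm{CVaR}_{\alpha}\big(f(x,Z)\big)
  = \mathbb{E}\big[\, f(x,Z) \;\big|\; f(x,Z) \le \mathrm{VaR}_{\alpha}\big(f(x,Z)\big) \,\big]
```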

Gradient Driven Rewards to Guarantee Fairness in Collaborative Machine Learning

no code implementations · NeurIPS 2021 · Xinyi Xu, Lingjuan Lyu, Xingjun Ma, Chenglin Miao, Chuan Sheng Foo, Bryan Kian Hsiang Low

In this paper, we adopt federated learning as a gradient-based formalization of collaborative machine learning, propose a novel cosine gradient Shapley value to evaluate the agents’ uploaded model parameter updates/gradients, and design theoretically guaranteed fair rewards in the form of better model performance.

BIG-bench Machine Learning · Fairness +1
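
A minimal sketch of the core alignment quantity (illustrative only; the paper's cosine gradient Shapley value aggregates such similarities over coalitions of agents):

```python
import numpy as np

def cosine_gradient_alignment(agent_grad, aggregate_grad, eps=1e-12):
    """Cosine similarity between one agent's flattened gradient update and
    the aggregated gradient: a simple alignment-based contribution score."""
    num = float(np.dot(agent_grad, aggregate_grad))
    den = np.linalg.norm(agent_grad) * np.linalg.norm(aggregate_grad) + eps
    return num / den
```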

Validation Free and Replication Robust Volume-based Data Valuation

no code implementations · NeurIPS 2021 · Xinyi Xu, Zhaoxuan Wu, Chuan Sheng Foo, Bryan Kian Hsiang Low

We observe that the diversity of the data points is an inherent property of the dataset that is independent of validation.

Data Valuation
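
A hedged sketch of the validation-free, diversity-sensitive quantity this line of work builds on: the volume of a data matrix, which grows with the diversity of its rows (the paper's replication-robust variant refines this basic notion):

```python
import numpy as np

def data_volume(X):
    """Volume of the feature matrix X (n samples by d features),
    taken as sqrt(det(X^T X)); larger when the rows are more diverse."""
    gram = X.T @ X
    return float(np.sqrt(np.linalg.det(gram)))
```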

Differentially Private Federated Bayesian Optimization with Distributed Exploration

no code implementations · NeurIPS 2021 · Zhongxiang Dai, Bryan Kian Hsiang Low, Patrick Jaillet

The resulting differentially private FTS with DE (DP-FTS-DE) algorithm is endowed with theoretical guarantees for both privacy and utility, and is amenable to interesting theoretical insights about the privacy-utility trade-off.

Bayesian Optimization · Federated Learning +1

Fault-Tolerant Federated Reinforcement Learning with Theoretical Guarantee

2 code implementations · NeurIPS 2021 · Flint Xiaofeng Fan, Yining Ma, Zhongxiang Dai, Wei Jing, Cheston Tan, Bryan Kian Hsiang Low

The growing literature of Federated Learning (FL) has recently inspired Federated Reinforcement Learning (FRL) to encourage multiple agents to federatively build a better decision-making policy without sharing raw trajectories.

Decision Making · Federated Learning +2

Neural Ensemble Search via Bayesian Sampling

no code implementations · 6 Sep 2021 · Yao Shu, Yizhou Chen, Zhongxiang Dai, Bryan Kian Hsiang Low

Unfortunately, these NAS algorithms aim to select only one single well-performing architecture from their search spaces and have thus overlooked the capability of a neural network ensemble (i.e., an ensemble of neural networks with diverse architectures) to achieve improved performance over a single final selected architecture.

Adversarial Defense · Neural Architecture Search

Trusted-Maximizers Entropy Search for Efficient Bayesian Optimization

1 code implementation · 30 Jul 2021 · Quoc Phong Nguyen, Zhaoxuan Wu, Bryan Kian Hsiang Low, Patrick Jaillet

Information-based Bayesian optimization (BO) algorithms have achieved state-of-the-art performance in optimizing a black-box objective function.

Bayesian Optimization · Face Recognition

Value-at-Risk Optimization with Gaussian Processes

no code implementations · 13 May 2021 · Quoc Phong Nguyen, Zhongxiang Dai, Bryan Kian Hsiang Low, Patrick Jaillet

Value-at-risk (VaR) is an established measure to assess risks in critical real-world applications with random environmental factors.

Gaussian Processes · Portfolio Optimization

Convolutional Normalizing Flows for Deep Gaussian Processes

no code implementations · 17 Apr 2021 · Haibin Yu, Dapeng Liu, Yizhou Chen, Bryan Kian Hsiang Low, Patrick Jaillet

Deep Gaussian processes (DGPs), a hierarchical composition of GP models, have successfully boosted the expressive power of their single-layer counterpart.

Gaussian Processes · Variational Inference

Meta-Learning with Implicit Processes

no code implementations · 1 Jan 2021 · Yizhou Chen, Dong Li, Na Li, Tong Liang, Shizhuo Zhang, Bryan Kian Hsiang Low

This paper presents a novel implicit process-based meta-learning (IPML) algorithm that, in contrast to existing works, explicitly represents each task as a continuous latent vector and models its probabilistic belief within the highly expressive IP framework.

Meta-Learning

Balancing training time vs. performance with Bayesian Early Pruning

no code implementations · 1 Jan 2021 · Mohit Rajpal, Yehong Zhang, Bryan Kian Hsiang Low

Pruning is an approach to alleviate overparameterization of deep neural networks (DNN) by zeroing out or pruning DNN elements with little to no efficacy at a given task.

Computational Efficiency

Top-$k$ Ranking Bayesian Optimization

1 code implementation · 19 Dec 2020 · Quoc Phong Nguyen, Sebastian Tay, Bryan Kian Hsiang Low, Patrick Jaillet

This paper presents a novel approach to top-$k$ ranking Bayesian optimization (top-$k$ ranking BO) which is a practical and significant generalization of preferential BO to handle top-$k$ ranking and tie/indifference observations.

Bayesian Optimization

An Information-Theoretic Framework for Unifying Active Learning Problems

1 code implementation · 19 Dec 2020 · Quoc Phong Nguyen, Bryan Kian Hsiang Low, Patrick Jaillet

This paper presents an information-theoretic framework for unifying active learning problems: level set estimation (LSE), Bayesian optimization (BO), and their generalized variant.

Active Learning · Bayesian Optimization

Variational Bayesian Unlearning

no code implementations · NeurIPS 2020 · Quoc Phong Nguyen, Bryan Kian Hsiang Low, Patrick Jaillet

We frame this problem as one of minimizing the Kullback-Leibler divergence between the approximate posterior belief of model parameters after directly unlearning from erased data vs. the exact posterior belief from retraining with remaining data.

Variational Inference
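
In symbols, with full data $\mathcal{D}$, erased data $\mathcal{D}_e$, approximate unlearned posterior $q$, and exact retrained posterior $p(\theta \mid \mathcal{D} \setminus \mathcal{D}_e)$ (a compact rendering of the objective described above):

```latex
q^{*} \;\in\; \arg\min_{q} \;
\mathrm{KL}\Big( q\big(\theta \mid \mathcal{D} \setminus \mathcal{D}_e\big)
  \, \Big\| \, p\big(\theta \mid \mathcal{D} \setminus \mathcal{D}_e\big) \Big)
```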

Private Outsourced Bayesian Optimization

no code implementations · 24 Oct 2020 · Dmitrii Kharkovskii, Zhongxiang Dai, Bryan Kian Hsiang Low

This paper presents the private-outsourced-Gaussian process-upper confidence bound (PO-GP-UCB) algorithm, which is the first algorithm for privacy-preserving Bayesian optimization (BO) in the outsourced setting with a provable performance guarantee.

Bayesian Optimization · Privacy Preserving

A Unifying Framework of Bilinear LSTMs

no code implementations · 23 Oct 2019 · Mohit Rajpal, Bryan Kian Hsiang Low

This paper presents a novel unifying framework of bilinear LSTMs that can represent and utilize the nonlinear interaction of the input features present in sequence datasets to achieve superior performance over a linear LSTM, while not incurring more parameters to be learned.
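
For intuition, the nonlinear interaction in question is of the bilinear kind: a linear gate scores concatenated features, whereas a bilinear map scores feature pairs (illustrative notation, not the paper's exact parameterization):

```latex
s_{\text{linear}} = W \begin{bmatrix} h \\ x \end{bmatrix},
\qquad
s_{\text{bilinear}} = h^{\top} \mathcal{W} \, x
```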

Distributed Batch Gaussian Process Optimization

no code implementations · ICML 2017 · Erik A. Daxberger, Bryan Kian Hsiang Low

To realize this, we generalize GP-UCB to a new batch variant amenable to a Markov approximation, which can then be naturally formulated as a multi-agent distributed constraint optimization problem in order to fully exploit the efficiency of its state-of-the-art solvers for achieving linear time in the batch size.

Bayesian Optimization

Inverse Reinforcement Learning with Locally Consistent Reward Functions

no code implementations · NeurIPS 2015 · Quoc Phong Nguyen, Bryan Kian Hsiang Low, Patrick Jaillet

By representing our IRL problem with a probabilistic graphical model, an expectation-maximization (EM) algorithm can be devised to iteratively learn the different reward functions and the stochastic transitions between them in order to jointly improve the likelihood of the expert’s demonstrated trajectories.

Clustering · reinforcement-learning +1
