no code implementations • 3 Mar 2024 • Joon Suk Huh, Ellen Vitercik, Kirthevasan Kandasamy
Specifically, we aim to maximize profit over an arbitrary sequence of multiple demand curves, each dependent on a distinct ancillary variable, but sharing the same price.
no code implementations • 13 Apr 2023 • Ting Cai, Kirthevasan Kandasamy
When the labeling cost is $B$, our algorithm, which chooses to label a point if the uncertainty is larger than a time and cost dependent threshold, achieves a worst-case upper bound of $\widetilde{O}(B^{\frac{1}{3}} K^{\frac{1}{3}} T^{\frac{2}{3}})$ on the loss after $T$ rounds.
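The decision rule described — query a label only when uncertainty clears a time- and cost-dependent threshold — can be illustrated with a minimal prediction-with-expert-advice sketch. The uncertainty proxy (disagreement in the weighted vote) and the threshold schedule below are hypothetical placeholders chosen only to echo the shape of the $B^{1/3}K^{1/3}T^{2/3}$ bound, not the paper's actual construction.

```python
import numpy as np

rng = np.random.default_rng(0)

K, T, B = 5, 2000, 50.0            # experts, rounds, labeling budget
weights = np.ones(K)               # multiplicative weights over the K experts
eta = np.sqrt(np.log(K) / T)       # standard learning-rate choice
spent = 0.0

for t in range(1, T + 1):
    preds = rng.random(K) < 0.6            # toy expert predictions this round
    p = weights / weights.sum()
    vote = float(p @ preds)                # weighted vote in [0, 1]
    uncertainty = min(vote, 1.0 - vote)    # disagreement as uncertainty proxy
    # hypothetical time- and cost-dependent threshold; the paper's exact
    # schedule differs, this only illustrates the shape of the rule
    threshold = (B * K / t ** 2) ** (1 / 3)
    truth = bool(rng.random() < 0.6)
    if uncertainty > threshold and spent < B:
        spent += 1.0                              # pay to observe the label
        loss = (preds != truth).astype(float)
        weights *= np.exp(-eta * loss)            # update only when labeled
```

Rounds whose uncertainty falls below the threshold are predicted on without spending budget, so the labeling cost stays bounded by $B$.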
no code implementations • 20 Feb 2023 • Wenshuo Guo, Nika Haghtalab, Kirthevasan Kandasamy, Ellen Vitercik
Customers with few relevant reviews may hesitate to make a purchase except at a low price, so for the seller, there is a tension between setting high prices and ensuring that there are enough reviews so that buyers can confidently estimate their values.
no code implementations • 11 Jun 2021 • Wenshuo Guo, Kirthevasan Kandasamy, Joseph E. Gonzalez, Michael I. Jordan, Ion Stoica

The allocations at a CE are Pareto efficient and fair.
no code implementations • 6 Jun 2021 • Brijen Thananjeyan, Kirthevasan Kandasamy, Ion Stoica, Michael I. Jordan, Ken Goldberg, Joseph E. Gonzalez
In this work, the decision-maker is given a deadline of $T$ rounds, where, on each round, it can adaptively choose which arms to pull and how many times to pull them; this distinguishes the number of decisions made (i.e., time or number of rounds) from the number of samples acquired (cost).
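The rounds-versus-samples distinction can be sketched with a simple batched successive-elimination loop, where each round is one decision but acquires many samples. This is an illustrative baseline on toy Bernoulli arms, not the algorithm from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

means = np.array([0.30, 0.50, 0.52, 0.90])   # toy Bernoulli arm means
T, batch = 30, 20                            # deadline in rounds; samples per arm per round
counts = np.zeros(len(means))
sums = np.zeros(len(means))
alive = list(range(len(means)))

for _ in range(T):
    # one round = one decision; it acquires batch * len(alive) samples
    for a in alive:
        draws = rng.random(batch) < means[a]
        counts[a] += batch
        sums[a] += draws.sum()
    idx = np.array(alive)
    mu = sums[idx] / counts[idx]
    rad = np.sqrt(np.log(4 * T) / counts[idx])     # confidence radius
    keep = mu + rad >= (mu - rad).max()            # drop dominated arms
    alive = [a for a, k in zip(alive, keep) if k]
    if len(alive) == 1:
        break

idx = np.array(alive)
best_arm = alive[int(np.argmax(sums[idx] / counts[idx]))]
```

A round costs one decision regardless of how many surviving arms are sampled in it, which is exactly the time/cost split the entry describes.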
no code implementations • 15 Dec 2020 • Kirthevasan Kandasamy, Gur-Eyal Sela, Joseph E. Gonzalez, Michael I. Jordan, Ion Stoica
We describe mechanisms for the allocation of a scarce resource among multiple users in a way that is efficient, fair, and strategy-proof, even when users do not know their resource requirements.
no code implementations • 31 Oct 2020 • Brijen Thananjeyan, Kirthevasan Kandasamy, Ion Stoica, Michael I. Jordan, Ken Goldberg, Joseph E. Gonzalez
Second, we present an algorithm for a fixed deadline setting, where we are given a time deadline and need to maximize the probability of finding the best arm.
no code implementations • 19 Apr 2020 • Kirthevasan Kandasamy, Joseph E. Gonzalez, Michael I. Jordan, Ion Stoica
To that end, we first define three notions of regret: for the welfare, for the individual utilities of each agent, and for that of the mechanism.
no code implementations • 6 Jan 2020 • Youngseog Chung, Ian Char, Willie Neiswanger, Kirthevasan Kandasamy, Andrew Oakleigh Nelson, Mark D Boyer, Egemen Kolemen, Jeff Schneider
One obstacle in utilizing fusion as a feasible energy source is the stability of the reaction.
1 code implementation • NeurIPS 2019 • Ian Char, Youngseog Chung, Willie Neiswanger, Kirthevasan Kandasamy, Oak Nelson, Mark Boyer, Egemen Kolemen
In black-box optimization, an agent repeatedly chooses a configuration to test, so as to find an optimal configuration.
no code implementations • 22 Oct 2019 • Adarsh Dave, Jared Mitchell, Kirthevasan Kandasamy, Sven Burke, Biswajit Paria, Barnabas Poczos, Jay Whitacre, Venkatasubramanian Viswanathan
Innovations in batteries take years to formulate and commercialize, requiring extensive experimentation during the design and optimization phases.
1 code implementation • 5 Aug 2019 • Ksenia Korovina, Sailun Xu, Kirthevasan Kandasamy, Willie Neiswanger, Barnabas Poczos, Jeff Schneider, Eric P. Xing
In applications such as molecule design or drug discovery, it is desirable to have an algorithm which recommends new candidate molecules based on the results of past tests.
1 code implementation • 15 Mar 2019 • Kirthevasan Kandasamy, Karun Raju Vysyaraju, Willie Neiswanger, Biswajit Paria, Christopher R. Collins, Jeff Schneider, Barnabas Poczos, Eric P. Xing
We compare Dragonfly to a suite of other packages and algorithms for global optimisation and demonstrate that when the above methods are integrated, they enable significant improvements in the performance of BO.
1 code implementation • 31 Jan 2019 • Willie Neiswanger, Kirthevasan Kandasamy, Barnabas Poczos, Jeff Schneider, Eric Xing
Optimizing an expensive-to-query function is a common task in science and engineering, where it is beneficial to keep the number of queries to a minimum.
1 code implementation • 24 Oct 2018 • Rajat Sen, Kirthevasan Kandasamy, Sanjay Shakkottai
We study the problem of black-box optimization of a noisy function in the presence of low-cost approximations or fidelities, which is motivated by problems like hyper-parameter tuning.
no code implementations • ICML 2018 • Rajat Sen, Kirthevasan Kandasamy, Sanjay Shakkottai
Motivated by settings such as hyper-parameter tuning and physical simulations, we consider the problem of black-box optimization of a function.
no code implementations • 30 May 2018 • Biswajit Paria, Kirthevasan Kandasamy, Barnabás Póczos
We also study a notion of regret in the multi-objective setting and show that our strategy achieves sublinear regret.
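Random scalarizations, the device behind this multi-objective strategy, are easy to illustrate: draw a weight vector at random, collapse the objectives into one score, and optimise that. The toy below uses a linear scalarization over a finite grid (one option among several; the paper's method runs inside Bayesian optimisation, which this sketch omits).

```python
import numpy as np

rng = np.random.default_rng(3)

# two toy objectives over a shared 1-D design space
xs = np.linspace(0.0, 1.0, 201)
f1 = -(xs - 0.2) ** 2
f2 = -(xs - 0.8) ** 2

picked = []
for _ in range(200):
    w = rng.dirichlet([1.0, 1.0])      # random scalarization weights
    score = w[0] * f1 + w[1] * f2      # linear scalarization of the objectives
    picked.append(float(xs[int(np.argmax(score))]))

# drawing fresh weights each round spreads the queries across the
# Pareto front, here the interval [0.2, 0.8]
```

Fixing a single weight vector would repeatedly query one Pareto point; randomising the weights is what yields coverage of the whole front.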
1 code implementation • 25 May 2018 • Kirthevasan Kandasamy, Willie Neiswanger, Reed Zhang, Akshay Krishnamurthy, Jeff Schneider, Barnabas Poczos
We design a new myopic strategy for a wide class of sequential design of experiment (DOE) problems, where the goal is to collect data in order to fulfil a certain problem-specific goal.
1 code implementation • NeurIPS 2018 • Kirthevasan Kandasamy, Willie Neiswanger, Jeff Schneider, Barnabas Poczos, Eric Xing
A common use case for BO in machine learning is model selection, where it is not possible to analytically model the generalisation performance of a statistical model, and we resort to noisy and expensive training and validation procedures to choose the best model.
1 code implementation • 25 May 2017 • Kirthevasan Kandasamy, Akshay Krishnamurthy, Jeff Schneider, Barnabas Poczos
We design and analyse variations of the classical Thompson sampling (TS) procedure for Bayesian optimisation (BO) in settings where function evaluations are expensive, but can be performed in parallel.
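The core idea — each worker draws its own sample from the Gaussian-process posterior and evaluates that sample's maximiser — can be sketched synchronously on a 1-D grid. The GP hyperparameters, grid, worker count, and toy objective below are all illustrative choices, and the asynchronous dispatch that the paper also analyses is not modelled here.

```python
import numpy as np

rng = np.random.default_rng(2)

def f(x):                                  # toy "expensive" black-box function
    return -(x - 0.3) ** 2

grid = np.linspace(0.0, 1.0, 101)

def gp_posterior(X, y, noise=1e-4, ls=0.1):
    # squared-exponential GP posterior on the grid (standard formulas)
    k = lambda a, b: np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls ** 2)
    K = k(X, X) + noise * np.eye(len(X))
    Ks, Kss = k(grid, X), k(grid, grid)
    A = np.linalg.solve(K, Ks.T)
    return A.T @ y, Kss - Ks @ A           # posterior mean and covariance

X, y = [0.0, 1.0], [f(0.0), f(1.0)]        # initial design
for _ in range(10):
    mu, cov = gp_posterior(np.array(X), np.array(y))
    # parallel TS: each of M = 3 simulated workers draws its own
    # posterior sample and evaluates that sample's maximiser
    for _ in range(3):
        g = rng.multivariate_normal(mu, cov + 1e-6 * np.eye(len(grid)))
        xn = float(grid[int(np.argmax(g))])
        X.append(xn)
        y.append(f(xn))

best = X[int(np.argmax(y))]
```

Because each draw is independent, the M evaluations per round need no coordination beyond sharing the posterior, which is what makes the scheme attractive for parallel workers.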
no code implementations • ICML 2017 • Kirthevasan Kandasamy, Gautam Dasarathy, Jeff Schneider, Barnabas Poczos
Bandit methods for black-box optimisation, such as Bayesian optimisation, are used in a variety of applications including hyper-parameter tuning and experiment design.
no code implementations • 10 Feb 2017 • Kirthevasan Kandasamy, Yoram Bachrach, Ryota Tomioka, Daniel Tarlow, David Carter
We study reinforcement learning of chatbots with recurrent neural network architectures when the rewards are noisy and expensive to obtain.
no code implementations • 3 Feb 2017 • Kirthevasan Kandasamy, Jeff Schneider, Barnabás Póczos
In this paper, we study active posterior estimation in a Bayesian setting when the likelihood is expensive to evaluate.
1 code implementation • NeurIPS 2016 • Kirthevasan Kandasamy, Gautam Dasarathy, Junier B. Oliva, Jeff Schneider, Barnabas Poczos
However, in many cases, cheap approximations to $f$ may be obtainable.
no code implementations • NeurIPS 2016 • Kirthevasan Kandasamy, Gautam Dasarathy, Jeff Schneider, Barnabás Póczos
We study a variant of the classical stochastic $K$-armed bandit where observing the outcome of each arm is expensive, but cheap approximations to this outcome are available.
1 code implementation • NeurIPS 2016 • Kirthevasan Kandasamy, Maruan Al-Shedivat, Eric P. Xing
Recently, there has been a surge of interest in using spectral methods for estimating latent variable models.
1 code implementation • 20 Mar 2016 • Kirthevasan Kandasamy, Gautam Dasarathy, Junier B. Oliva, Jeff Schneider, Barnabas Poczos
However, in many cases, cheap approximations to $f$ may be obtainable.
2 code implementations • 31 Jan 2016 • Kirthevasan Kandasamy, Yao-Liang Yu
Between non-additive models, which often have large variance, and first-order additive models, which have large bias, there has been little work exploiting the trade-off in the middle via additive models of intermediate order.
no code implementations • NeurIPS 2015 • Kirthevasan Kandasamy, Akshay Krishnamurthy, Barnabas Poczos, Larry Wasserman, James M. Robins
We propose and analyse estimators for statistical functionals of one or more distributions under nonparametric assumptions. Our estimators are derived from the von Mises expansion and are based on the theory of influence functions, which appear in the semiparametric statistics literature. We show that estimators based either on data-splitting or a leave-one-out technique enjoy fast rates of convergence and other favorable theoretical properties. We apply this framework to derive estimators for several popular information-theoretic quantities, and via empirical evaluation, show the advantage of this approach over existing estimators.
no code implementations • 5 Mar 2015 • Kirthevasan Kandasamy, Jeff Schneider, Barnabas Poczos
We prove that, for additive functions the regret has only linear dependence on $D$ even though the function depends on all $D$ dimensions.
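The additive structure makes the linear-in-$D$ scaling concrete: if $f(x) = \sum_i f_i(x_i)$, each one-dimensional component can be optimised separately, so the search cost grows with $D$ rather than exponentially in it. A toy sketch with made-up component functions:

```python
import numpy as np

D = 6
centers = np.linspace(0.1, 0.9, D)
# hypothetical additive function: f(x) = sum_i f_i(x_i)
comps = [lambda x, c=c: -(x - c) ** 2 for c in centers]

grid = np.linspace(0.0, 1.0, 101)
# optimise each 1-D component independently: D grid searches of 101 points
# each, instead of one joint search over 101**D points
x_star = np.array([grid[int(np.argmax(g(grid)))] for g in comps])
```

The same decomposition is what lets the regret of an additive bandit method avoid the exponential dependence on dimension.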
2 code implementations • 17 Nov 2014 • Kirthevasan Kandasamy, Akshay Krishnamurthy, Barnabas Poczos, Larry Wasserman, James M. Robins
We propose and analyze estimators for statistical functionals of one or more distributions under nonparametric assumptions.
no code implementations • 30 Oct 2014 • Akshay Krishnamurthy, Kirthevasan Kandasamy, Barnabas Poczos, Larry Wasserman
We give a comprehensive theoretical characterization of a nonparametric estimator for the $L_2^2$ divergence between two continuous distributions.
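A naive plug-in version of such an estimator substitutes kernel density estimates into $\int (p - q)^2\,dx$. The sketch below is only this simple baseline on synthetic Gaussians; the paper's analysis covers estimators with better convergence rates, and the sample sizes and grid here are arbitrary.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(4)

# samples from N(0, 1) and N(0.5, 1); the true L2^2 divergence is
# 2/(2*sqrt(pi)) * (1 - exp(-1/16)) ~= 0.034
p_samp = rng.normal(0.0, 1.0, 2000)
q_samp = rng.normal(0.5, 1.0, 2000)

# plug-in estimate of  int (p - q)^2 dx  via kernel density estimates
p_hat = gaussian_kde(p_samp)
q_hat = gaussian_kde(q_samp)

grid = np.linspace(-6.0, 7.0, 2001)
dx = grid[1] - grid[0]
est = float(np.sum((p_hat(grid) - q_hat(grid)) ** 2) * dx)
```

Kernel smoothing biases the plug-in estimate downward, which is one reason a careful theoretical characterization of such estimators matters.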
no code implementations • 12 Feb 2014 • Akshay Krishnamurthy, Kirthevasan Kandasamy, Barnabas Poczos, Larry Wasserman
We consider nonparametric estimation of $L_2$, Renyi-$\alpha$ and Tsallis-$\alpha$ divergences between continuous distributions.