no code implementations • 21 Feb 2024 • Lin An, Andrew A. Li, Benjamin Moseley, Gabriel Visotsky
We take the shadow price of each resource as the prediction; these prices can be obtained from predictions of future requests.
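For intuition, a minimal sketch of how a shadow-price prediction can be used as a threshold rule in online resource allocation (illustrative only; the rule and all names below are assumptions, not the paper's algorithm):

```python
def accept_request(value, demand, predicted_price, remaining):
    """Accept a request only if its value covers the predicted opportunity cost
    of the resources it consumes and enough capacity remains.
    `demand` and `remaining` map resource -> amount; `predicted_price`
    maps resource -> predicted shadow price per unit."""
    cost = sum(predicted_price[r] * amt for r, amt in demand.items())
    feasible = all(remaining[r] >= amt for r, amt in demand.items())
    return feasible and value >= cost
```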
1 code implementation • 25 May 2023 • Sungjin Im, Benjamin Moseley, Chenyang Xu, Ruilong Zhang
This elegant model studies the trade-off between the acknowledgement cost and the waiting time experienced by requests.
1 code implementation • 22 Oct 2022 • Michael Dinitz, Sungjin Im, Thomas Lavastida, Benjamin Moseley, Sergei Vassilvitskii
For each of these problems we introduce new algorithms that take advantage of multiple predictors, and prove bounds on the resulting performance.
1 code implementation • 10 Dec 2021 • Chenyang Xu, Benjamin Moseley
The Steiner tree problem is known to have strong lower bounds in the online setting, and any algorithm's worst-case guarantee is far from desirable.
no code implementations • NeurIPS 2021 • Silvio Lattanzi, Benjamin Moseley, Sergei Vassilvitskii, Yuyan Wang, Rudy Zhou
In correlation clustering we are given a set of points along with recommendations for whether each pair of points should be placed in the same cluster or in separate clusters.
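For concreteness, a small sketch of the standard disagreement objective such recommendations induce (a generic illustration of correlation clustering with hypothetical names, not code from the paper):

```python
def disagreements(recommend_same, clustering):
    """Count recommendations violated by a clustering.
    `recommend_same[(i, j)]` is True if points i and j are recommended to be
    together; `clustering[i]` is the cluster id assigned to point i."""
    cost = 0
    for (i, j), same in recommend_same.items():
        together = clustering[i] == clustering[j]
        cost += int(same != together)
    return cost
```

In the standard formulation, the goal is to choose a clustering minimizing this count of disagreements.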
no code implementations • NeurIPS 2021 • Michael Dinitz, Sungjin Im, Thomas Lavastida, Benjamin Moseley, Sergei Vassilvitskii
Second, once the duals are feasible, they may not be optimal, so we show that they can be used to quickly find an optimal solution.
no code implementations • 23 Nov 2020 • Thomas Lavastida, Benjamin Moseley, R. Ravi, Chenyang Xu
Instance robustness ensures that the prediction is robust to modest changes in the problem input, where the measure of the change may be problem specific.
1 code implementation • 30 Aug 2020 • Benjamin Moseley, Yuyan Wang
The paper builds a theoretical connection between this objective and the bisecting k-means algorithm.
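For reference, a minimal sketch of bisecting k-means, the algorithm the connection is drawn to (scikit-learn is assumed for the 2-means step; the recursion structure and names are illustrative):

```python
import numpy as np
from sklearn.cluster import KMeans

def bisecting_kmeans(X, min_size=2):
    """Build a top-down hierarchy by recursively splitting clusters with 2-means.
    Returns a nested tuple of index arrays representing the cluster tree."""
    def split(idx):
        if len(idx) < min_size:
            return idx
        labels = KMeans(n_clusters=2, n_init=10).fit_predict(X[idx])
        left, right = idx[labels == 0], idx[labels == 1]
        if len(left) == 0 or len(right) == 0:  # degenerate split, stop recursing
            return idx
        return (split(left), split(right))
    return split(np.arange(len(X)))
```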
no code implementations • 1 Aug 2020 • Benjamin Moseley, Kirk Pruhs, Alireza Samadian, Yuyan Wang
Few relational algorithms are known, and this paper offers techniques for designing relational algorithms as well as characterizations of their limitations.
no code implementations • NeurIPS 2020 • Sara Ahmadian, Alessandro Epasto, Marina Knittel, Ravi Kumar, Mohammad Mahdian, Benjamin Moseley, Philip Pham, Sergei Vassilvitskii, Yuyan Wang
As machine learning has become more prevalent, researchers have begun to recognize the necessity of ensuring machine learning systems are fair.
no code implementations • 11 May 2020 • Mahmoud Abo-Khamis, Sungjin Im, Benjamin Moseley, Kirk Pruhs, Alireza Samadian
We consider gradient-descent-like algorithms for Support Vector Machine (SVM) training when the data is in relational form.
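For context, the ordinary single-table subgradient-descent SVM update is sketched below; the relational setting aims to run such updates directly on relational data rather than on a materialized joined table. The sketch is the non-relational baseline with illustrative names, not the paper's algorithm.

```python
import numpy as np

def svm_subgradient_descent(X, y, lam=0.01, lr=0.1, epochs=200):
    """Subgradient descent on the soft-margin SVM objective
    lam/2 * ||w||^2 + (1/n) * sum_i max(0, 1 - y_i * <w, x_i>),
    with labels y in {-1, +1}."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(epochs):
        margins = y * (X @ w)
        active = margins < 1                               # margin violators
        hinge_grad = -(X[active] * y[active, None]).sum(axis=0) / n
        w -= lr * (lam * w + hinge_grad)
    return w
```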
no code implementations • 24 Mar 2020 • Mahmoud Abo-Khamis, Sungjin Im, Benjamin Moseley, Kirk Pruhs, Alireza Samadian
In contrast, we show that the situation with two additive inequalities is quite different: it is NP-hard to evaluate simple aggregation queries with two additive inequalities to within any bounded relative error.
1 code implementation • NeurIPS 2019 • Shali Jiang, Roman Garnett, Benjamin Moseley
We study a special paradigm of active learning, called cost-effective active search, where the goal is to find a given number of positive points from a large unlabeled pool at minimum labeling cost.
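To make the setting concrete, here is a myopic greedy baseline (not the policy proposed in the paper): query the most promising points first and stop once the target number of positives has been found; the number of queries is the labeling cost. Names and the scoring interface are illustrative.

```python
import numpy as np

def greedy_active_search(scores, oracle, target_positives):
    """Myopic baseline for cost-effective active search.
    `scores[i]` is the model's predicted probability that point i is positive;
    `oracle(i)` returns 1 if point i is truly positive (one query = one unit of cost)."""
    cost, found = 0, 0
    for i in np.argsort(-scores):        # most promising points first
        found += oracle(i)
        cost += 1
        if found >= target_positives:
            break
    return cost
```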
1 code implementation • ICLR 2019 • Ayan Chakrabarti, Benjamin Moseley
Training convolutional neural network models is memory-intensive, since back-propagation requires storing the activations of all intermediate layers.
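A small sketch of where that memory goes (this illustrates the bottleneck only, not the paper's remedy; the layer structure and names are illustrative): every layer's output must be cached on the forward pass because the backward pass reuses it.

```python
import numpy as np

def forward(x, weights):
    """Forward pass through ReLU layers, caching every intermediate activation;
    the cache grows linearly with network depth."""
    cache = [x]
    for W in weights:
        x = np.maximum(0.0, x @ W)
        cache.append(x)                      # kept only so backprop can use it
    return x, cache

def backward(grad_out, weights, cache):
    """Backward pass: each cached activation is consumed exactly once."""
    grads, g = [], grad_out
    for W, a_in, a_out in zip(reversed(weights), reversed(cache[:-1]), reversed(cache[1:])):
        g = g * (a_out > 0)                  # ReLU derivative at the cached output
        grads.append(a_in.T @ g)             # gradient with respect to W
        g = g @ W.T                          # gradient passed to the previous layer
    return list(reversed(grads))
```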
no code implementations • 22 Dec 2018 • Mahmoud Abo Khamis, Ryan R. Curtin, Benjamin Moseley, Hung Q. Ngo, XuanLong Nguyen, Dan Olteanu, Maximilian Schleich
This new width is sandwiched between the submodular and the fractional hypertree widths.
no code implementations • NeurIPS 2018 • Shali Jiang, Gustavo Malkomes, Matthew Abbott, Benjamin Moseley, Roman Garnett
A critical target scenario is high-throughput screening for scientific discovery, such as drug or materials discovery.
no code implementations • 21 Nov 2018 • Shali Jiang, Gustavo Malkomes, Benjamin Moseley, Roman Garnett
We also study the batch setting for the first time, where a batch of $b>1$ points can be queried at each iteration.
1 code implementation • 7 Oct 2018 • Bryce Bagley, Blake Bordelon, Benjamin Moseley, Ralf Wessel
Learning synaptic weights of spiking neural network (SNN) models that can reproduce target spike trains from provided neural firing data is a central problem in computational neuroscience and spike-based computing.
no code implementations • NeurIPS 2017 • Benjamin Moseley, Joshua Wang
Hierarchical clustering is a data analysis method that has been used for decades.
no code implementations • ICML 2017 • Shali Jiang, Gustavo Malkomes, Geoff Converse, Alyssa Shofner, Benjamin Moseley, Roman Garnett
Active search is an active learning setting with the goal of identifying as many members of a given class as possible under a labeling budget.
no code implementations • NeurIPS 2015 • Gustavo Malkomes, Matt J. Kusner, Wenlin Chen, Kilian Q. Weinberger, Benjamin Moseley
Clustering large data is a fundamental problem with a vast number of applications.
no code implementations • 22 Apr 2013 • Arpita Ghosh, Satyen Kale, Kevin Lang, Benjamin Moseley
We study trade networks with a tree structure, where a seller with a single indivisible good is connected to buyers, each with some value for the good, via a unique path of intermediaries.
3 code implementations • 29 Mar 2012 • Bahman Bahmani, Benjamin Moseley, Andrea Vattani, Ravi Kumar, Sergei Vassilvitskii
The recently proposed k-means++ initialization algorithm achieves this, obtaining an initial set of centers that is provably close to the optimum solution.
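For reference, the sequential k-means++ (D²-weighted) seeding step looks roughly like the sketch below; the paper's contribution is a scalable variant that replaces this inherently sequential loop with a small number of parallel oversampling rounds. Function and variable names are illustrative.

```python
import numpy as np

def kmeans_pp_seed(X, k, seed=None):
    """Sequential k-means++ seeding: each new center is sampled with probability
    proportional to its squared distance to the nearest already-chosen center."""
    rng = np.random.default_rng(seed)
    centers = [X[rng.integers(len(X))]]              # first center: uniform at random
    for _ in range(k - 1):
        diffs = X[:, None, :] - np.asarray(centers)[None, :, :]
        d2 = (diffs ** 2).sum(axis=-1).min(axis=1)   # squared distance to closest center
        centers.append(X[rng.choice(len(X), p=d2 / d2.sum())])
    return np.array(centers)
```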