no code implementations • 2 Apr 2024 • Yu Xia, Xu Liu, Tong Yu, Sungchul Kim, Ryan A. Rossi, Anup Rao, Tung Mai, Shuai Li
Large Language Models (LLMs) have shown a propensity to generate hallucinated outputs, i.e., texts that are factually incorrect or unsupported.
no code implementations • 8 Nov 2023 • Renzhi Wu, Saayan Mitra, Xiang Chen, Anup Rao
Therefore, we propose a new learning setting, Decentralized Personalized Online Federated Learning, that considers all three aspects at the same time.
no code implementations • 10 Jan 2023 • Anup Rao
Elastic Cash is a new decentralized mechanism for regulating the money supply.
1 code implementation • 12 Oct 2022 • Raghavendra Addanki, David Arbour, Tung Mai, Cameron Musco, Anup Rao
In particular, we study sample-constrained treatment effect estimation, where we must select a subset of $s \ll n$ individuals from the population to experiment on.
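The paper's selection procedures are more sophisticated; as a minimal sketch of the setting, the snippet below simulates a sample-constrained experiment: pick $s$ individuals uniformly at random, randomize half to treatment, and estimate the average treatment effect by a difference in means. The potential-outcome arrays `y_treated`/`y_control` are hypothetical and exist here only so the simulation can be run.

```python
import random

def sampled_ate(y_treated, y_control, s, seed=0):
    """Sample s individuals from the population, randomize half to treatment,
    and estimate the average treatment effect by a difference in means."""
    rng = random.Random(seed)
    chosen = rng.sample(range(len(y_treated)), s)
    treat, control = chosen[: s // 2], chosen[s // 2:]
    return (sum(y_treated[i] for i in treat) / len(treat)
            - sum(y_control[i] for i in control) / len(control))

# Simulated population of n = 1000 with a constant treatment effect of 2.
base = [i % 5 for i in range(1000)]       # baseline outcomes
y_control = base
y_treated = [b + 2 for b in base]
print(sampled_ate(y_treated, y_control, s=400))  # close to the true effect, 2
```

With $s \ll n$ the estimator's variance grows, which is exactly why the choice of the subset matters in the paper's setting.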
no code implementations • 28 Jan 2022 • Nikhil Sheoran, Subrata Mitra, Vibhor Porwal, Siddharth Ghetia, Jatin Varshney, Tung Mai, Anup Rao, Vikas Maddukuri
The goal of Approximate Query Processing (AQP) is to provide very fast but "accurate enough" results for costly aggregate queries thereby improving user experience in interactive exploration of large datasets.
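A minimal illustration of the AQP idea, not the paper's method: answer a costly SUM aggregate from a small uniform sample and scale up, trading a little accuracy for a large speedup. The function name and sampling fraction are illustrative.

```python
import random

def approximate_sum(rows, sample_frac=0.01, seed=0):
    """Estimate SUM(rows) from a uniform random sample, scaling the sample
    sum by the inverse sampling fraction (Horvitz-Thompson style)."""
    random.seed(seed)
    k = max(1, int(len(rows) * sample_frac))
    sample = random.sample(rows, k)
    return sum(sample) * (len(rows) / k)

data = list(range(1_000_000))
est = approximate_sum(data, sample_frac=0.01)
print(est)  # within a fraction of a percent of sum(data)
```

Real AQP systems add error bounds and stratified or learned samples on top of this basic estimator.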
no code implementations • 29 Nov 2021 • Aravind Reddy, Ryan A. Rossi, Zhao Song, Anup Rao, Tung Mai, Nedim Lipka, Gang Wu, Eunyee Koh, Nesreen Ahmed
In this paper, we introduce the online and streaming MAP inference and learning problems for Non-symmetric Determinantal Point Processes (NDPPs) where data points arrive in an arbitrary order and the algorithms are constrained to use a single-pass over the data as well as sub-linear memory.
no code implementations • 19 Sep 2021 • Sridhar Mahadevan, Anup Rao, Georgios Theocharous, Jennifer Healey
Many real-world applications require aligning two temporal sequences, including bioinformatics, handwriting recognition, activity recognition, and human-robot coordination.
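A classical baseline for aligning two temporal sequences (not necessarily the alignment method this paper proposes) is dynamic time warping, which finds the minimum-cost monotone matching by dynamic programming:

```python
import math

def dtw(a, b, dist=lambda x, y: abs(x - y)):
    """Dynamic time warping distance between sequences a and b,
    computed by the standard O(len(a) * len(b)) dynamic program."""
    n, m = len(a), len(b)
    D = [[math.inf] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = dist(a[i - 1], b[j - 1])
            # extend the cheapest of the three admissible predecessor cells
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]

print(dtw([0, 1, 2, 3], [0, 0, 1, 2, 2, 3]))  # 0.0: the second sequence is a time-warped copy
```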
no code implementations • NeurIPS 2021 • Tung Mai, Cameron N Musco, Anup Rao
It also does not depend on the specific loss function, so a single coreset can be used in multiple training scenarios.
no code implementations • 8 Mar 2021 • Mojtaba Sahraee-Ardakan, Tung Mai, Anup Rao, Ryan Rossi, Sundeep Rangan, Alyson K. Fletcher
We show the double descent phenomenon in our experiments for convolutional models and show that our theoretical results match the experiments.
no code implementations • 25 Feb 2021 • Enayat Ullah, Tung Mai, Anup Rao, Ryan Rossi, Raman Arora
Our key contribution is the design of corresponding efficient unlearning algorithms, which are based on constructing a (maximal) coupling of Markov chains for the noisy SGD procedure.
no code implementations • 4 Feb 2021 • David Arbour, Drew Dimmery, Tung Mai, Anup Rao
We study the online discrepancy minimization problem for vectors in $\mathbb{R}^d$ in the oblivious setting, where an adversary is allowed to fix the vectors $x_1, x_2, \ldots, x_n$ in arbitrary order ahead of time.
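The paper's algorithms are more refined, but the following sketch shows the basic online mechanic: each arriving vector must immediately receive a sign $\pm 1$, and a simple greedy rule keeps the signed running sum short.

```python
import math
import random

def greedy_signs(vectors):
    """Greedy online signing heuristic: give each arriving vector the sign
    that leaves the running signed sum with the smaller Euclidean norm."""
    total = [0.0] * len(vectors[0])
    signs = []
    for v in vectors:
        plus = [t + x for t, x in zip(total, v)]
        minus = [t - x for t, x in zip(total, v)]
        if sum(p * p for p in plus) <= sum(m * m for m in minus):
            total, s = plus, 1
        else:
            total, s = minus, -1
        signs.append(s)
    return signs, total

random.seed(1)
vecs = [[random.uniform(-1, 1) for _ in range(5)] for _ in range(200)]
signs, total = greedy_signs(vecs)
print(math.sqrt(sum(t * t for t in total)))  # stays below sqrt(sum of squared lengths)
```

The greedy choice guarantees $\|S_n\|^2 \le \sum_i \|x_i\|^2$, since the chosen sign makes the cross term nonpositive; the oblivious-adversary algorithms in the paper achieve much stronger discrepancy bounds.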
Data Structures and Algorithms • Discrete Mathematics • Combinatorics
no code implementations • 15 Jan 2021 • Mohammad Mehrabi, Adel Javanmard, Ryan A. Rossi, Anup Rao, Tung Mai
We study the tradeoff between standard risk and adversarial risk and derive the Pareto-optimal tradeoff, achievable over specific classes of models, in the infinite data limit with features dimension kept fixed.
no code implementations • 23 Oct 2020 • Ryan A. Rossi, Nesreen K. Ahmed, Aldo Carranza, David Arbour, Anup Rao, Sungchul Kim, Eunyee Koh
Notably, since typed graphlets are more general than colored graphlets (and untyped graphlets), the counts of various typed graphlets can be combined to obtain the counts of the much simpler notion of colored graphlets.
1 code implementation • 21 Oct 2020 • David Arbour, Drew Dimmery, Anup Rao
In this work, we reframe the problem of balanced treatment assignment as optimization of a two-sample test between test and control units.
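As a crude illustration of this reframing (not the paper's actual optimization), a rerandomization loop draws many candidate 50/50 assignments and keeps the one with the smallest two-sample statistic; here the statistic is just the absolute difference in covariate means, and all names are illustrative.

```python
import random

def balanced_assignment(covariates, n_draws=500, seed=0):
    """Rerandomization sketch: draw candidate half/half treatment assignments
    and keep the one whose treatment/control covariate means are closest."""
    rng = random.Random(seed)
    n = len(covariates)
    idx = list(range(n))
    best, best_stat = None, float("inf")
    for _ in range(n_draws):
        rng.shuffle(idx)
        treat = set(idx[: n // 2])
        t_mean = sum(covariates[i] for i in treat) / len(treat)
        c_mean = sum(covariates[i] for i in range(n) if i not in treat) / (n - n // 2)
        stat = abs(t_mean - c_mean)  # a crude two-sample statistic
        if stat < best_stat:
            best, best_stat = set(treat), stat
    return best, best_stat

treat, stat = balanced_assignment(list(range(20)))
print(len(treat), stat)
```

Directly optimizing the test statistic, as the paper proposes, replaces this brute-force search with a principled assignment procedure.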
1 code implementation • 28 Sep 2020 • Jiong Zhu, Ryan A. Rossi, Anup Rao, Tung Mai, Nedim Lipka, Nesreen K. Ahmed, Danai Koutra
Graph Neural Networks (GNNs) have proven to be useful for many different practical applications.
1 code implementation • 4 Jun 2020 • Tan Nguyen, Ali Shameli, Yasin Abbasi-Yadkori, Anup Rao, Branislav Kveton
We study the sample complexity of optimizing "hill-climbing friendly" functions defined on a graph under noisy observations.
no code implementations • NeurIPS 2020 • Aldo Pacchiano, My Phan, Yasin Abbasi-Yadkori, Anup Rao, Julian Zimmert, Tor Lattimore, Csaba Szepesvari
Our methods rely on a novel and generic smoothing transformation for bandit algorithms that permits us to obtain optimal $O(\sqrt{T})$ model selection guarantees for stochastic contextual bandit problems as long as the optimal base algorithm satisfies a high probability regret guarantee.
no code implementations • 12 Jun 2019 • Ryan A. Rossi, Anup Rao, Sungchul Kim, Eunyee Koh, Nesreen K. Ahmed, Gang Wu
In this work, we investigate higher-order network motifs and develop techniques based on the notion of closing higher-order motifs that move beyond closing simple triangles.
no code implementations • 28 Jan 2019 • Ryan A. Rossi, Nesreen K. Ahmed, Aldo Carranza, David Arbour, Anup Rao, Sungchul Kim, Eunyee Koh
To address this problem, we propose a fast, parallel, and space-efficient framework for counting typed graphlets in large networks.
1 code implementation • 11 Nov 2018 • Di Jin, Ryan Rossi, Danai Koutra, Eunyee Koh, Sungchul Kim, Anup Rao
Motivated by the computational and storage challenges that dense embeddings pose, we introduce the problem of latent network summarization, which aims to learn a compact, latent representation of the graph structure with dimensionality that is independent of the input graph size (i.e., #nodes and #edges), while retaining the ability to derive node representations on the fly.
Social and Information Networks
no code implementations • 6 Oct 2018 • Aldo G. Carranza, Ryan A. Rossi, Anup Rao, Eunyee Koh
Using typed-graphlets as a basis, we develop a general principled framework for higher-order clustering in heterogeneous networks.
no code implementations • 12 Sep 2018 • John Boaz Lee, Ryan A. Rossi, Xiangnan Kong, Sungchul Kim, Eunyee Koh, Anup Rao
Experiments show that our proposed method is able to achieve state-of-the-art results on the semi-supervised node classification task.
no code implementations • 24 May 2018 • Sharan Vaswani, Branislav Kveton, Zheng Wen, Anup Rao, Mark Schmidt, Yasin Abbasi-Yadkori
We investigate the use of bootstrapping in the bandit setting.
no code implementations • 28 Jan 2018 • Ryan A. Rossi, Nesreen K. Ahmed, Eunyee Koh, Sungchul Kim, Anup Rao, Yasin Abbasi Yadkori
This paper describes a general framework for learning Higher-Order Network Embeddings (HONE) from graph data based on network motifs.
no code implementations • 13 Dec 2017 • Branislav Kveton, Csaba Szepesvari, Anup Rao, Zheng Wen, Yasin Abbasi-Yadkori, S. Muthukrishnan
Many problems in computer vision and recommender systems involve low-rank matrices.
1 code implementation • NeurIPS 2015 • Rasmus Kyng, Anup Rao, Sushant Sachdeva
Given a directed acyclic graph $G,$ and a set of values $y$ on the vertices, the Isotonic Regression of $y$ is a vector $x$ that respects the partial order described by $G,$ and minimizes $\|x-y\|,$ for a specified norm.
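The paper handles general DAGs and norms; for the classical special case where $G$ is a simple path $v_1 \to v_2 \to \cdots \to v_n$ and the norm is $\ell_2$, the Pool Adjacent Violators algorithm solves the problem in linear time, as this sketch shows:

```python
def isotonic_l2_chain(y):
    """Pool Adjacent Violators: L2 isotonic regression when the DAG is a
    simple path, so x must be nondecreasing. Maintains blocks of pooled
    values as (sum, count) pairs; violating adjacent blocks are merged."""
    blocks = []
    for v in y:
        s, c = v, 1
        # merge while the previous block's mean exceeds the new block's mean
        while blocks and blocks[-1][0] / blocks[-1][1] > s / c:
            ps, pc = blocks.pop()
            s, c = s + ps, c + pc
        blocks.append((s, c))
    out = []
    for s, c in blocks:
        out.extend([s / c] * c)  # every vertex in a block gets the block mean
    return out

print(isotonic_l2_chain([3, 1, 2, 5]))  # [2.0, 2.0, 2.0, 5.0]
```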
1 code implementation • 2 Jul 2015 • Rasmus Kyng, Anup Rao, Sushant Sachdeva
Given a directed acyclic graph $G,$ and a set of values $y$ on the vertices, the Isotonic Regression of $y$ is a vector $x$ that respects the partial order described by $G,$ and minimizes $\|x-y\|,$ for a specified norm.
1 code implementation • 1 May 2015 • Rasmus Kyng, Anup Rao, Sushant Sachdeva, Daniel A. Spielman
We develop fast algorithms for solving regression problems on graphs where one is given the value of a function at some vertices, and must find its smoothest possible extension to all vertices.
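This paper's focus is on $\ell_\infty$-minimal (Lipschitz) extensions; as a simpler, related illustration of extending boundary values over a graph, the sketch below computes the classical 2-norm-smoothest (harmonic) extension by repeated local averaging. The function name and iteration count are illustrative.

```python
def harmonic_extension(adj, fixed, iters=2000):
    """Illustrative 2-norm-smoothest extension: sweep over the graph,
    replacing each free vertex's value with the average of its neighbors
    (Gauss-Seidel iteration for the graph Laplacian system).
    adj: {vertex: [neighbors]}; fixed: {vertex: value} on the boundary."""
    x = {v: fixed.get(v, 0.0) for v in adj}
    for _ in range(iters):
        for v in adj:
            if v not in fixed:
                x[v] = sum(x[u] for u in adj[v]) / len(adj[v])
    return x

# Path a - b - c - d with boundary values at the two endpoints.
adj = {"a": ["b"], "b": ["a", "c"], "c": ["b", "d"], "d": ["c"]}
x = harmonic_extension(adj, {"a": 0.0, "d": 3.0})
print(round(x["b"], 4), round(x["c"], 4))  # 1.0 2.0
```

The fast solvers in the paper replace this slow fixed-point iteration and target the harder $\ell_\infty$ objective.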