Search Results for author: Avishek Ghosh

Found 21 papers, 2 papers with code

Optimal Compression of Unit Norm Vectors in the High Distortion Regime

no code implementations • 16 Jul 2023 • Heng Zhu, Avishek Ghosh, Arya Mazumdar

We approach this problem in a worst-case scenario, without any prior information on the vector, but allowing for the use of randomized compression maps.
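As a rough illustration of what a randomized compression map for unit-norm vectors can look like (a generic sketch, not the scheme proposed in the paper), one can apply a random rotation followed by one-bit sign quantization:

    # Generic sketch only (not the paper's compressor): randomly rotate the
    # unit vector, keep only the signs (d bits), and decode by inverting the
    # rotation and re-normalizing.
    import numpy as np

    def compress(v, rng):
        Q, _ = np.linalg.qr(rng.standard_normal((v.size, v.size)))  # random rotation
        return np.sign(Q @ v), Q

    def decompress(signs, Q):
        u = Q.T @ signs
        return u / np.linalg.norm(u)                                # unit-norm estimate

    rng = np.random.default_rng(0)
    v = rng.standard_normal(16)
    v /= np.linalg.norm(v)
    signs, Q = compress(v, rng)
    print("squared error:", np.linalg.norm(v - decompress(signs, Q)) ** 2)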

An Improved Algorithm for Clustered Federated Learning

1 code implementation • 20 Oct 2022 • Harshvardhan, Avishek Ghosh, Arya Mazumdar

\texttt{SR-FCA} initializes by treating each user as a singleton cluster, and then successively refines the cluster estimates by exploiting similar users belonging to the same cluster.

Clustering, Federated Learning
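A rough sketch of the refinement idea described above, assuming a simple distance threshold on locally trained models (the threshold and distance are illustrative; the released SR-FCA code should be consulted for the actual procedure):

    # Illustrative only: start from singleton clusters and repeatedly merge
    # clusters whose average local models are close, mimicking the
    # successive-refinement step at a high level.
    import numpy as np

    def refine_clusters(local_models, tol=0.5):
        clusters = [[i] for i in range(len(local_models))]    # singleton initialization
        merged = True
        while merged:
            merged = False
            for a in range(len(clusters)):
                for b in range(a + 1, len(clusters)):
                    ca = np.mean([local_models[i] for i in clusters[a]], axis=0)
                    cb = np.mean([local_models[i] for i in clusters[b]], axis=0)
                    if np.linalg.norm(ca - cb) < tol:          # similar users merge
                        clusters[a] += clusters.pop(b)
                        merged = True
                        break
                if merged:
                    break
        return clusters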

Exploration in Linear Bandits with Rich Action Sets and its Implications for Inference

no code implementations • 23 Jul 2022 • Debangshu Banerjee, Avishek Ghosh, Sayak Ray Chowdhury, Aditya Gopalan

Furthermore, while the previous result is shown to hold only in the asymptotic regime (as $n \to \infty$), our result for these "locally rich" action spaces is any-time.

Clustering, Model Selection

Model Selection in Reinforcement Learning with General Function Approximations

no code implementations • 6 Jul 2022 • Avishek Ghosh, Sayak Ray Chowdhury

We consider model selection for classic Reinforcement Learning (RL) environments -- Multi Armed Bandits (MABs) and Markov Decision Processes (MDPs) -- under general function approximations.

Model Selection, Multi-Armed Bandits +2

Decentralized Competing Bandits in Non-Stationary Matching Markets

no code implementations • 31 May 2022 • Avishek Ghosh, Abishek Sankararaman, Kannan Ramchandran, Tara Javidi, Arya Mazumdar

We propose and analyze a decentralized and asynchronous learning algorithm, namely Decentralized Non-stationary Competing Bandits (\texttt{DNCB}), where the agents play (restrictive) successive elimination type learning algorithms to learn their preference over the arms.
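The successive-elimination building block mentioned above can be sketched as follows (a generic single-agent elimination step; the decentralization, asynchrony, and matching-market conflict handling of \texttt{DNCB} are not shown):

    # Generic successive-elimination step: an arm stays active only if its
    # upper confidence bound is at least the best lower confidence bound.
    import numpy as np

    def eliminate(active_arms, means, counts, t):
        radius = np.sqrt(2.0 * np.log(t + 1) / np.maximum(counts, 1))
        best_lcb = max(means[a] - radius[a] for a in active_arms)
        return [a for a in active_arms if means[a] + radius[a] >= best_lcb]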

On Learning Mixture of Linear Regressions in the Non-Realizable Setting

no code implementations • 26 May 2022 • Avishek Ghosh, Arya Mazumdar, Soumyabrata Pal, Rajat Sen

In this paper, we show that a version of the popular alternating minimization (AM) algorithm finds the best-fit lines in a dataset even when a realizable model is not assumed, under some regularity conditions on the dataset and the initial points, thereby providing a solution to the ERM problem.
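A minimal sketch of alternating minimization for fitting $k$ lines (the paper's initialization and regularity conditions are omitted; only the alternating structure is shown):

    # Minimal AM sketch: alternately assign each point to its best-fitting
    # line, then refit each line by least squares on its assigned points.
    import numpy as np

    def am_mixture(X, y, k, iters=50, seed=0):
        rng = np.random.default_rng(seed)
        betas = rng.standard_normal((k, X.shape[1]))
        for _ in range(iters):
            residuals = (X @ betas.T - y[:, None]) ** 2
            labels = residuals.argmin(axis=1)                  # assignment step
            for j in range(k):
                idx = labels == j
                if idx.any():                                  # refit step
                    betas[j] = np.linalg.lstsq(X[idx], y[idx], rcond=None)[0]
        return betas, labels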

Breaking the $\sqrt{T}$ Barrier: Instance-Independent Logarithmic Regret in Stochastic Contextual Linear Bandits

no code implementations • 19 May 2022 • Avishek Ghosh, Abishek Sankararaman

The (poly) logarithmic regret of \texttt{LR-SCB} stems from two crucial facts: (a) the application of a norm-adaptive algorithm to exploit the parameter estimation and (b) an analysis of the shifted linear contextual bandit algorithm, showing that shifting results in increased regret.

Multi-Armed Bandits

Model Selection for Generic Reinforcement Learning

no code implementations • 13 Jul 2021 • Avishek Ghosh, Sayak Ray Chowdhury, Kannan Ramchandran

We propose and analyze a novel algorithm, namely \emph{Adaptive Reinforcement Learning (General)} (\texttt{ARL-GEN}) that adapts to the smallest such family where the true transition kernel $P^*$ lies.

Model Selection, reinforcement-learning +1

Model Selection for Generic Contextual Bandits

no code implementations • 7 Jul 2021 • Avishek Ghosh, Abishek Sankararaman, Kannan Ramchandran

We consider the problem of model selection for the general stochastic contextual bandits under the realizability assumption.

Model Selection, Multi-Armed Bandits

Adaptive Clustering and Personalization in Multi-Agent Stochastic Linear Bandits

no code implementations • 15 Jun 2021 • Avishek Ghosh, Abishek Sankararaman, Kannan Ramchandran

We show that, for any agent, the regret scales as $\mathcal{O}(\sqrt{T/N})$, if the agent is in a `well separated' cluster, or scales as $\mathcal{O}(T^{\frac{1}{2} + \varepsilon}/(N)^{\frac{1}{2} -\varepsilon})$ if its cluster is not well separated, where $\varepsilon$ is positive and arbitrarily close to $0$.

Clustering

LocalNewton: Reducing Communication Bottleneck for Distributed Learning

no code implementations • 16 May 2021 • Vipul Gupta, Avishek Ghosh, Michal Derezinski, Rajiv Khanna, Kannan Ramchandran, Michael Mahoney

To enhance practicality, we devise an adaptive scheme to choose $L$, and we show that this reduces the number of local iterations in worker machines between two model synchronizations as the training proceeds, successively refining the model quality at the master.

Distributed Optimization
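A toy sketch of the local-iterations idea for a least-squares loss (illustrative only, not the LocalNewton implementation; the adaptive choice of $L$ described above would shrink $L$ over rounds):

    # Each worker takes L Newton-style steps on its local loss, then the
    # master averages the local models (one "model synchronization").
    import numpy as np

    def local_newton_round(w, workers, L):
        updates = []
        for X, y in workers:                        # each worker holds (X, y)
            w_local = w.copy()
            H = X.T @ X / len(y)                    # local Hessian (least squares)
            for _ in range(L):                      # L local iterations
                g = X.T @ (X @ w_local - y) / len(y)
                w_local = w_local - np.linalg.solve(H, g)
            updates.append(w_local)
        return np.mean(updates, axis=0)             # averaged at the master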

Escaping Saddle Points in Distributed Newton's Method with Communication Efficiency and Byzantine Resilience

no code implementations • 17 Mar 2021 • Avishek Ghosh, Raj Kumar Maity, Arya Mazumdar, Kannan Ramchandran

Moreover, we validate our theoretical findings with experiments using standard datasets and several types of Byzantine attacks, and obtain an improvement of $25\%$ with respect to first order methods in iteration complexity.

Federated Learning

Distributed Newton Can Communicate Less and Resist Byzantine Workers

no code implementations • NeurIPS 2020 • Avishek Ghosh, Raj Kumar Maity, Arya Mazumdar

We develop a distributed second order optimization algorithm that is communication-efficient as well as robust against Byzantine failures of the worker machines.

Distributed Optimization

Alternating Minimization Converges Super-Linearly for Mixed Linear Regression

no code implementations • 23 Apr 2020 • Avishek Ghosh, Kannan Ramchandran

Furthermore, we compare AM with a gradient based heuristic algorithm empirically and show that AM dominates in iteration complexity as well as wall-clock time.

regression

Communication-Efficient and Byzantine-Robust Distributed Learning with Error Feedback

no code implementations • 21 Nov 2019 • Avishek Ghosh, Raj Kumar Maity, Swanand Kadhe, Arya Mazumdar, Kannan Ramchandran

Moreover, we analyze the compressed gradient descent algorithm with error feedback (proposed in \cite{errorfeed}) in a distributed setting and in the presence of Byzantine worker machines.
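The error-feedback mechanism can be sketched generically as follows (a top-$k$ compressor is used purely as an example; the Byzantine-robust aggregation analyzed in the paper is omitted):

    # Error feedback: compress (gradient + residual memory), transmit the
    # compressed vector, and keep the compression error as the new memory.
    import numpy as np

    def topk(v, k):
        out = np.zeros_like(v)
        idx = np.argsort(np.abs(v))[-k:]
        out[idx] = v[idx]
        return out

    def error_feedback_step(grad, memory, k):
        corrected = grad + memory          # add back previously discarded mass
        sent = topk(corrected, k)          # what the worker communicates
        new_memory = corrected - sent      # error carried to the next round
        return sent, new_memory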

Max-Affine Regression: Provable, Tractable, and Near-Optimal Statistical Estimation

no code implementations • 21 Jun 2019 • Avishek Ghosh, Ashwin Pananjady, Adityanand Guntuboyina, Kannan Ramchandran

Max-affine regression refers to a model where the unknown regression function is modeled as a maximum of $k$ unknown affine functions for a fixed $k \geq 1$.

regression, Retrieval
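A data-generating sketch of the max-affine model described above (the dimensions, $k$, and noise level are illustrative):

    # The response is the maximum of k affine functions of the covariates,
    # plus observation noise.
    import numpy as np

    rng = np.random.default_rng(0)
    n, d, k = 200, 3, 2
    X = rng.standard_normal((n, d))
    theta = rng.standard_normal((k, d))             # unknown slopes
    b = rng.standard_normal(k)                      # unknown intercepts
    y = (X @ theta.T + b).max(axis=1) + 0.1 * rng.standard_normal(n)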

Robust Federated Learning in a Heterogeneous Environment

no code implementations • 16 Jun 2019 • Avishek Ghosh, Justin Hong, Dong Yin, Kannan Ramchandran

Then, leveraging the statistical model, we solve the robust heterogeneous Federated Learning problem \emph{optimally}; in particular our algorithm matches the lower bound on the estimation error in dimension and the number of data points.

Clustering, Federated Learning

Online Scoring with Delayed Information: A Convex Optimization Viewpoint

no code implementations • 9 Jul 2018 • Avishek Ghosh, Kannan Ramchandran

We argue that the error in the score estimate accumulated over $T$ iterations is small if the regret of the online convex game is small.

Misspecified Linear Bandits

no code implementations • 23 Apr 2017 • Avishek Ghosh, Sayak Ray Chowdhury, Aditya Gopalan

Regret guarantees for state-of-the-art linear bandit algorithms such as Optimism in the Face of Uncertainty Linear bandit (OFUL) hold under the assumption that the arms' expected rewards are perfectly linear in their features.

Learning-To-Rank
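A standard way to formalize the deviation from this linearity assumption (a common formulation, not necessarily the exact one used in the paper) is to allow a bounded misspecification term: $\mathbb{E}[r_a] = \langle \theta^*, x_a \rangle + \eta_a$ with $\sup_a |\eta_a| \le \varepsilon$, so that $\varepsilon = 0$ recovers the perfectly linear setting under which OFUL's guarantees hold.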
