no code implementations • 25 Jun 2023 • Dheeraj Baby, Aniket Das, Dheeraj Nagaraj, Praneeth Netrapalli
Our work shows that we can estimate $\mathbf{w}^{*}$ in squared norm up to an error of $\tilde{O}\left(\|\mathbf{f}^{*}\|^2 \cdot \left(\frac{1}{n} + \left(\frac{d}{n}\right)^2\right)\right)$ and prove a matching lower bound (up to log factors).
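As a quick sanity check on how the two terms in this bound trade off, here is a short numeric sketch; the values standing in for $\|\mathbf{f}^{*}\|^2$, the dimension $d$, and the sample sizes $n$ are illustrative placeholders, not values from the paper, and constants and log factors are suppressed.

```python
# Illustrative scaling of the bound ||w_hat - w*||^2 <= O~(||f*||^2 * (1/n + (d/n)^2)).
# f_norm_sq, d, and the n values are placeholders; constants/log factors are dropped.

f_norm_sq = 1.0   # stands in for ||f*||^2 (problem dependent)
d = 50            # ambient dimension

for n in [100, 1_000, 10_000, 100_000]:
    bound = f_norm_sq * (1.0 / n + (d / n) ** 2)
    print(f"n={n:>7d}  error bound ~ {bound:.2e}")
# For n comparable to d the (d/n)^2 term dominates; once n >> d the 1/n term
# takes over and the bound decays at the parametric rate.
```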
no code implementations • 8 Jun 2022 • Aniket Das, Dheeraj Nagaraj, Anant Raj
We consider stochastic approximations of sampling algorithms, such as Stochastic Gradient Langevin Dynamics (SGLD) and the Random Batch Method (RBM) for Interacting Particle Dynamics (IPD).
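For context, below is a minimal sketch of an SGLD-style update on a toy target; the potential, dataset, minibatch size, and step size are illustrative choices, not the paper's setup.

```python
import numpy as np

# Minimal SGLD sketch on a toy target (illustrative setup, not the paper's):
# sample from exp(-U(x)) with U(x) = ||x - mu||^2 / 2, where mu is the mean of
# a synthetic dataset; grad U is estimated on a random minibatch, mimicking
# the stochastic-approximation setting.

rng = np.random.default_rng(0)
data = rng.normal(loc=1.0, size=(1000, 2))       # synthetic observations

def grad_U_est(x, batch):
    # stochastic estimate of grad U(x) = x - mu, using the minibatch mean
    return x - batch.mean(axis=0)

x = np.zeros(2)
eta = 1e-2                                       # step size
for _ in range(5000):
    batch = data[rng.choice(len(data), size=32, replace=False)]
    # SGLD update: stochastic gradient step plus injected Gaussian noise
    x = x - eta * grad_U_est(x, batch) + np.sqrt(2 * eta) * rng.normal(size=2)

print("final iterate:", x)   # wanders around the data mean, roughly (1, 1)
```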
no code implementations • 7 Jun 2022 • Aniket Das, Bernhard Schölkopf, Michael Muehlebach
We obtain tight convergence rates for Random Reshuffling (RR) and Shuffle Once (SO) and demonstrate that these strategies lead to faster convergence than uniform sampling.
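A minimal sketch contrasting the three sampling strategies, shown on a simple least-squares objective rather than the finite-sum minimax setting studied in the paper; the problem sizes and learning rate are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 256, 5
A = rng.normal(size=(n, d))
b = rng.normal(size=n)                     # toy least-squares data

def grad_i(w, i):
    # gradient of the i-th component f_i(w) = 0.5 * (a_i . w - b_i)^2
    return (A[i] @ w - b[i]) * A[i]

def run(strategy, epochs=50, lr=0.01):
    w = np.zeros(d)
    shuffle_once = rng.permutation(n)      # fixed order, drawn a single time
    for _ in range(epochs):
        if strategy == "RR":               # reshuffle at the start of every epoch
            order = rng.permutation(n)
        elif strategy == "SO":             # reuse the single initial shuffle
            order = shuffle_once
        else:                              # uniform i.i.d. sampling with replacement
            order = rng.integers(0, n, size=n)
        for i in order:
            w -= lr * grad_i(w, i)
    return np.mean((A @ w - b) ** 2)

for s in ["RR", "SO", "uniform"]:
    print(f"{s:>7s}: final mean-squared residual {run(s):.6f}")
```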
no code implementations • 7 Nov 2021 • Avinandan Bose, Aniket Das, Yatin Dandi, Piyush Rai
In this work, we propose a novel generative model that learns a flexible non-parametric prior over interpolation trajectories, conditioned on a pair of source and target images.
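A schematic sketch of the trajectory idea, under assumptions: the `TrajectoryNet` module and its residual parameterization below are hypothetical stand-ins, not the paper's architecture. It contrasts a learned trajectory with the fixed straight-line interpolation $z(t) = (1-t)\,z_{\text{src}} + t\,z_{\text{tgt}}$ that fixed-prior models typically use.

```python
import torch
import torch.nn as nn

# Hypothetical sketch: a network mapping (z_src, z_tgt, t) to a latent point,
# generalizing straight-line interpolation. Not the paper's architecture.

latent_dim = 64

class TrajectoryNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * latent_dim + 1, 128), nn.ReLU(),
            nn.Linear(128, latent_dim),
        )

    def forward(self, z_src, z_tgt, t):
        # residual around the linear path; the t*(1-t) factor vanishes at the
        # endpoints, so the trajectory still starts at z_src and ends at z_tgt
        lin = (1 - t) * z_src + t * z_tgt
        h = torch.cat([z_src, z_tgt, t.expand(z_src.size(0), 1)], dim=-1)
        return lin + t * (1 - t) * self.net(h)

traj = TrajectoryNet()
z_src, z_tgt = torch.randn(1, latent_dim), torch.randn(1, latent_dim)
path = [traj(z_src, z_tgt, torch.tensor([[s / 9]])) for s in range(10)]
print(len(path), path[0].shape)   # 10 latent points along the trajectory
```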
no code implementations • NeurIPS Workshop DLDE 2021 • Avinandan Bose, Aniket Das, Yatin Dandi, Piyush Rai
A range of applications require learning image generation models whose latent space effectively captures the high-level factors of variation in the data distribution, a property that can be judged by the model's ability to interpolate smoothly between images.
no code implementations • 17 Dec 2019 • Yatin Dandi, Aniket Das, Soumye Singhal, Vinay P. Namboodiri, Piyush Rai
The proposed model allows minor variations in content across frames while maintaining temporal dependence through latent vectors that encode pose or motion features.
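A schematic sketch of such a content/pose factorization, assuming a GRU for the temporal dependence; the module names and dimensions are hypothetical, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

# Hypothetical sketch of a factorized video decoder: one content vector is
# shared across a clip, while a sequence of pose vectors, passed through a
# recurrent network, carries the per-frame temporal variation.

content_dim, pose_dim, frame_dim, T = 32, 8, 64, 16

class FrameDecoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.rnn = nn.GRU(pose_dim, pose_dim, batch_first=True)  # temporal dependence
        self.dec = nn.Sequential(
            nn.Linear(content_dim + pose_dim, 128), nn.ReLU(),
            nn.Linear(128, frame_dim),
        )

    def forward(self, content, poses):
        poses, _ = self.rnn(poses)                               # (B, T, pose_dim)
        content = content[:, None, :].expand(-1, poses.size(1), -1)
        # each frame is decoded from the shared content plus its own pose state
        return self.dec(torch.cat([content, poses], dim=-1))    # (B, T, frame_dim)

model = FrameDecoder()
video = model(torch.randn(2, content_dim), torch.randn(2, T, pose_dim))
print(video.shape)  # torch.Size([2, 16, 64])
```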
1 code implementation • 8 Sep 2019 • Avik Pal, Aniket Das
The key features of TorchGAN are its extensibility, built-in support for a large number of popular models, losses, and evaluation metrics, and zero overhead compared to vanilla PyTorch.
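A minimal training sketch in the style of TorchGAN's documented usage, with the built-in DCGAN models and minimax losses on MNIST; exact argument names may differ across library versions, so treat the config below as illustrative.

```python
# Sketch following TorchGAN's documented usage pattern; treat argument names
# as illustrative, since they may differ across library versions.
import torch
import torchvision
from torch.optim import Adam
from torchgan.models import DCGANGenerator, DCGANDiscriminator
from torchgan.losses import MinimaxGeneratorLoss, MinimaxDiscriminatorLoss
from torchgan.trainer import Trainer

# MNIST resized to 32x32 to match the default DCGAN output size
dataset = torchvision.datasets.MNIST(
    root="./data", download=True,
    transform=torchvision.transforms.Compose([
        torchvision.transforms.Resize(32),
        torchvision.transforms.ToTensor(),
        torchvision.transforms.Normalize((0.5,), (0.5,)),
    ]),
)
loader = torch.utils.data.DataLoader(dataset, batch_size=64, shuffle=True)

# Models, constructor args, and per-model optimizers in one config dict
network = {
    "generator": {
        "name": DCGANGenerator,
        "args": {"out_channels": 1, "step_channels": 16},
        "optimizer": {"name": Adam, "args": {"lr": 2e-4, "betas": (0.5, 0.999)}},
    },
    "discriminator": {
        "name": DCGANDiscriminator,
        "args": {"in_channels": 1, "step_channels": 16},
        "optimizer": {"name": Adam, "args": {"lr": 2e-4, "betas": (0.5, 0.999)}},
    },
}
losses = [MinimaxGeneratorLoss(), MinimaxDiscriminatorLoss()]

# device set to CPU here for portability; switch to a CUDA device if available
trainer = Trainer(network, losses, sample_size=64, epochs=5,
                  device=torch.device("cpu"))
trainer(loader)  # runs the full training loop
```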