no code implementations • 7 Nov 2023 • Philip Andrew Mansfield, Arash Afkanpour, Warren Richard Morningstar, Karan Singhal
In this work, we propose a new family of local transformations based on Gaussian random fields to generate image augmentations for self-supervised representation learning.
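As a rough illustration of the idea (not the paper's exact transformation family), a smooth Gaussian random field can be sampled by spectrally filtering white noise and then used as a spatially varying intensity perturbation. The function names and the multiplicative-perturbation choice below are assumptions for the sketch:

```python
import numpy as np

def sample_grf(shape, length_scale, rng):
    """Sample a smooth 2-D Gaussian random field by low-pass filtering white noise."""
    noise = rng.standard_normal(shape)
    fy = np.fft.fftfreq(shape[0])[:, None]
    fx = np.fft.fftfreq(shape[1])[None, :]
    # Gaussian spectral filter: larger length_scale -> smoother field
    filt = np.exp(-2.0 * (np.pi * length_scale) ** 2 * (fx ** 2 + fy ** 2))
    field = np.fft.ifft2(np.fft.fft2(noise) * filt).real
    return field / (field.std() + 1e-8)  # normalize to unit variance

def grf_augment(image, strength=0.2, length_scale=8.0, rng=None):
    """Local augmentation sketch: modulate pixel intensities with a GRF."""
    rng = rng or np.random.default_rng()
    field = sample_grf(image.shape[:2], length_scale, rng)
    # Multiplicative perturbation, clipped back to the valid [0, 1] range
    return np.clip(image * (1.0 + strength * field[..., None]), 0.0, 1.0)
```

Because the field varies smoothly in space, nearby pixels receive similar perturbations while distant regions are transformed differently, which is what makes the augmentation "local" rather than global.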
no code implementations • 11 Sep 2023 • Pengfei Guo, Warren Richard Morningstar, Raviteja Vemulapalli, Karan Singhal, Vishal M. Patel, Philip Andrew Mansfield
To mitigate this issue and facilitate training of large models on edge devices, we introduce a simple yet effective strategy, Federated Layer-wise Learning, to simultaneously reduce per-client memory, computation, and communication costs.
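A toy sketch of the layer-wise idea (not the paper's exact algorithm): clients run gradient steps only on the currently active layer, treating earlier layers as frozen feature extractors, and the server averages the per-client updates FedAvg-style. All names and the linear/tanh toy model here are assumptions:

```python
import numpy as np

def client_update(active_w, frozen_ws, x, y, lr=0.1):
    """One local step on the active layer only; earlier layers are frozen."""
    h = x
    for w in frozen_ws:              # forward through frozen layers
        h = np.tanh(h @ w)
    pred = h @ active_w              # active (trainable) layer
    grad = h.T @ (pred - y) / len(x) # MSE gradient w.r.t. the active layer only
    return active_w - lr * grad

def federated_layerwise_round(active_w, frozen_ws, client_data):
    """Server averages per-client updates to the single active layer."""
    updates = [client_update(active_w, frozen_ws, x, y) for x, y in client_data]
    return np.mean(updates, axis=0)
```

Since only one layer's weights and gradients are held and communicated per stage, per-client memory, computation, and upload costs all scale with a single layer rather than the full model, which is the strategy's stated benefit on edge devices.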
no code implementations • 23 May 2023 • Elahe Vedadi, Joshua V. Dillon, Philip Andrew Mansfield, Karan Singhal, Arash Afkanpour, Warren Richard Morningstar
We then approximate this process using Variational Inference to train our model efficiently.
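For background on the variational approximation step, here is a minimal Monte Carlo ELBO estimator for a scalar conjugate-Gaussian model with a mean-field Gaussian posterior, using the reparameterization trick. This is generic VI machinery, not the paper's model; the function name and model are assumptions:

```python
import numpy as np

def elbo_estimate(mu_q, log_sigma_q, data, prior_sigma=1.0, n_samples=256, rng=None):
    """Monte Carlo ELBO for q(theta) = N(mu_q, sigma_q^2) over the mean of a
    unit-variance Gaussian likelihood, with prior theta ~ N(0, prior_sigma^2)."""
    rng = rng or np.random.default_rng()
    sigma_q = np.exp(log_sigma_q)
    # Reparameterized samples: theta = mu + sigma * eps, eps ~ N(0, 1)
    theta = mu_q + sigma_q * rng.standard_normal(n_samples)
    log_lik = np.array(
        [-0.5 * np.sum((data - t) ** 2 + np.log(2 * np.pi)) for t in theta]
    )
    log_prior = -0.5 * (theta / prior_sigma) ** 2 - 0.5 * np.log(2 * np.pi * prior_sigma ** 2)
    log_q = -0.5 * ((theta - mu_q) / sigma_q) ** 2 - 0.5 * np.log(2 * np.pi * sigma_q ** 2)
    # ELBO = E_q[log p(data|theta) + log p(theta) - log q(theta)]
    return np.mean(log_lik + log_prior - log_q)
```

Maximizing this estimate over `(mu_q, log_sigma_q)` drives q toward the true posterior; in this conjugate case the optimum is known in closed form, which makes the estimator easy to sanity-check.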
no code implementations • 30 Sep 2022 • Raviteja Vemulapalli, Warren Richard Morningstar, Philip Andrew Mansfield, Hubert Eichner, Karan Singhal, Arash Afkanpour, Bradley Green
In this work, we focus on federated training of dual encoding models on decentralized data composed of many small non-IID (not independent and identically distributed) client datasets.
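For context, dual encoding models are commonly trained with a symmetric in-batch softmax contrastive loss, where each client's small local batch supplies the negatives, which is precisely where small non-IID client datasets become a challenge. The sketch below shows that standard loss, not necessarily the paper's exact objective; the function name is an assumption:

```python
import numpy as np

def contrastive_loss(z_a, z_b, temperature=0.1):
    """Symmetric in-batch softmax contrastive loss for a dual encoder:
    row i of z_a should match row i of z_b; other rows act as negatives."""
    z_a = z_a / np.linalg.norm(z_a, axis=1, keepdims=True)
    z_b = z_b / np.linalg.norm(z_b, axis=1, keepdims=True)
    logits = z_a @ z_b.T / temperature          # pairwise cosine similarities
    idx = np.arange(len(z_a))
    # Cross-entropy with the matching pair as the positive class, both directions
    log_p_a = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    log_p_b = logits.T - np.log(np.exp(logits.T).sum(axis=1, keepdims=True))
    return -0.5 * (log_p_a[idx, idx].mean() + log_p_b[idx, idx].mean())
```

With only a handful of local examples per client, each batch contributes few negatives and those negatives are drawn from a skewed local distribution, which motivates federated methods designed for this setting.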