1 code implementation • 3 Nov 2023 • Aditya Desai, Benjamin Meisburger, Zichang Liu, Anshumali Shrivastava
To include data from all devices in federated learning, we must enable collective training of embedding tables on devices with heterogeneous memory capacities.
no code implementations • 17 Oct 2023 • Aditya Desai, Anshumali Shrivastava
In this paper, we comprehensively assess the trade-off between memory and accuracy across RPS, pruning techniques, and building smaller models.
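Random parameter sharing (RPS) is commonly implemented in the hashing-trick style, where a layer's full weight matrix is never materialized and each virtual weight is read from a small shared buffer via a random hash. The following is a minimal illustrative sketch of that idea, not the paper's implementation; the function names, the simple linear hash, and all parameters are assumptions for demonstration only.

```python
import numpy as np

def make_shared_layer(n_in, n_out, shared_size, seed=0):
    """Hypothetical hashing-trick layer: n_in*n_out virtual weights are
    backed by a shared buffer of only shared_size real parameters."""
    rng = np.random.default_rng(seed)
    shared = rng.standard_normal(shared_size).astype(np.float32) * 0.01
    a = int(rng.integers(1, 2**31)) | 1  # odd multiplier for a simple hash
    b = int(rng.integers(0, 2**31))
    # Map every virtual weight index (i, j) to a slot in the shared buffer.
    flat = np.arange(n_in * n_out, dtype=np.int64)
    idx = ((a * flat + b) % (2**31)) % shared_size

    def forward(x):
        # Materialize the virtual matrix here only for clarity; a real
        # implementation would gather on the fly.
        W = shared[idx].reshape(n_in, n_out)
        return x @ W

    return forward, shared
```

The memory/accuracy trade-off the paper studies corresponds to shrinking `shared_size` relative to `n_in * n_out`.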
no code implementations • 21 Jul 2022 • Aditya Desai, Anshumali Shrivastava
It contains 100 GB of embedding memory (25+ billion parameters).
1 code implementation • 21 Jul 2022 • Aditya Desai, Keren Zhou, Anshumali Shrivastava
Advancements in deep learning are often associated with increasing model sizes.
no code implementations • NeurIPS 2021 • Aditya Desai, Zhaozhuo Xu, Menal Gupta, Anu Chandran, Antoine Vial-Aussavy, Anshumali Shrivastava
This paradigm breaks the SI into local inversion tasks, each of which predicts a small chunk of the subsurface properties using the surrounding seismic data.
no code implementations • 29 Sep 2021 • Aditya Desai, Shashank Sonkar, Anshumali Shrivastava, Richard Baraniuk
Grounded in this framework, we show that many algorithms across different domains are, in fact, searching for continuous stable coloring solutions of an underlying graph corresponding to the domain.
no code implementations • 4 Aug 2021 • Aditya Desai, Li Chou, Anshumali Shrivastava
In this paper, we present Random Offset Block Embedding Array (ROBE), a low-memory alternative to embedding tables that provides orders-of-magnitude reduction in memory usage while maintaining accuracy and boosting execution speed.
no code implementations • 26 Feb 2021 • Zhaozhuo Xu, Aditya Desai, Menal Gupta, Anu Chandran, Antoine Vial-Aussavy, Anshumali Shrivastava
We propose a fundamental shift away from convolutions and introduce SESDI, a Set-Embedding-based SDI approach.
1 code implementation • 24 Feb 2021 • Aditya Desai, Yanzhou Pan, Kuangyuan Sun, Li Chou, Anshumali Shrivastava
In particular, our LMA embeddings match the performance of standard embeddings with a 16$\times$ reduction in memory footprint.
no code implementations • 24 Feb 2021 • Aditya Desai, Benjamin Coleman, Anshumali Shrivastava
We introduce Density sketches (DS): a succinct online summary of the data distribution.
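A succinct online summary of a data distribution is often built by binning values and counting bins in sublinear space. The sketch below illustrates that general idea with a count-min sketch over bins; it is an assumption-laden stand-in, not the paper's DS construction, and every name and parameter here is hypothetical.

```python
import numpy as np

class CountMinDensity:
    """Illustrative streaming density summary: values are binned, and bin
    counts are kept in a small count-min sketch (depth x width table).
    Queries return a conservative (never under-counting) density estimate."""

    def __init__(self, width=256, depth=4, bin_width=0.1, seed=0):
        rng = np.random.default_rng(seed)
        self.table = np.zeros((depth, width), dtype=np.int64)
        self.a = rng.integers(1, 2**31, size=depth) | 1
        self.b = rng.integers(0, 2**31, size=depth)
        self.width, self.bin_width = width, bin_width
        self.n = 0  # total points seen

    def _cols(self, x):
        k = int(np.floor(x / self.bin_width))  # bin index of x
        return (self.a * k + self.b) % (2**31) % self.width

    def add(self, x):
        self.table[np.arange(len(self.a)), self._cols(x)] += 1
        self.n += 1

    def density(self, x):
        # Min over rows bounds the true bin count from above only via
        # hash collisions, so this estimate never under-counts.
        est = self.table[np.arange(len(self.a)), self._cols(x)].min()
        return est / (self.n * self.bin_width)
```

The whole summary is `depth * width` integers regardless of stream length, which is the "succinct online" property the abstract refers to.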