no code implementations • 16 Oct 2023 • Ramya Burra, Anshoo Tandon, Srishti Mittal
These optimizations enable MNIST dataset inference in just 32 seconds with only 0.2 GB of RAM for a 5-layer neural network.
1 code implementation • 27 Oct 2021 • Fengzhuo Zhang, Anshoo Tandon, Vincent Y. F. Tan
We design and analyze an algorithm, Active Learning Algorithm for Trees with Homogeneous Edge (Active-LATHE), which surprisingly boosts the error exponent by at least 40% when $\rho$ is at least $0.8$.
no code implementations • 22 Jan 2021 • Anshoo Tandon, Aldric H. J. Yuan, Vincent Y. F. Tan
We provide error exponent analyses and extensive numerical results on a variety of trees to show that the sample complexity of SGA is significantly better than the algorithm of Katiyar et al. (2020).
no code implementations • 9 May 2020 • Anshoo Tandon, Vincent Y. F. Tan, Shiyao Zhu
In this case, we show that they strictly improve on the recent results of Nikolakakis, Kalogerias, and Sarwate [Proc.