2 Nov 2017 • Mayank Kabra, Kristin Branson
We give a covering number bound for deep learning networks that is independent of the size of the network.
CVPR 2015 • Mayank Kabra, Alice Robie, Kristin Branson
Because exactly computing the influence of each training example is computationally impractical, we propose a novel distance metric that approximates influence for boosting classifiers and is fast enough for interactive use.
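The paper's actual metric is not reproduced here. As a purely illustrative sketch of the general idea — scoring how close training examples are to a query in the weak-learner output space of a boosted ensemble, which is cheap because it needs only one pass over the weak learners — one might write (the functions `stump_outputs` and `approx_influence`, and the stump representation, are hypothetical, not from the paper):

```python
import numpy as np

def stump_outputs(X, stumps):
    """Evaluate each decision stump (feature index, threshold) on every
    example; returns an (n_examples, n_stumps) matrix of +/-1 outputs."""
    return np.stack(
        [np.where(X[:, f] <= t, -1.0, 1.0) for f, t in stumps], axis=1
    )

def approx_influence(x_query, X_train, stumps, alphas):
    """Illustrative influence proxy (NOT the paper's metric): a training
    example scores high if the same weak learners respond the same way to
    it and to the query, weighted by each learner's boosting weight."""
    H_train = stump_outputs(X_train, stumps)          # (n, m)
    h_query = stump_outputs(x_query[None, :], stumps)[0]  # (m,)
    agreement = H_train * h_query  # +1 where outputs match, -1 otherwise
    return agreement @ alphas      # one score per training example
```

Ranking the training set by this score gives a fast, interactive approximation of which examples most affect the ensemble's prediction at the query, under the stated assumptions.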
NeurIPS 2007 • Yoav Freund, Sanjoy Dasgupta, Mayank Kabra, Nakul Verma
We present a simple variant of the k-d tree that automatically adapts to intrinsic low-dimensional structure in the data.
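A minimal sketch of the core idea, assuming a random-projection-style split rule: instead of splitting on coordinate axes as a k-d tree does, each node splits at the median along a random direction, so the cells shrink with the data's intrinsic dimension rather than the ambient dimension. (This is a generic sketch of such trees, not the paper's exact construction.)

```python
import numpy as np

def build_rp_tree(points, min_leaf=10, rng=None):
    """Build a random-projection tree over `points` (n x d array):
    each internal node projects its points onto a random unit direction
    and splits at the median projection."""
    rng = np.random.default_rng(rng)

    def build(idx):
        if len(idx) <= min_leaf:
            return {"leaf": True, "idx": idx}
        direction = rng.normal(size=points.shape[1])
        direction /= np.linalg.norm(direction)
        proj = points[idx] @ direction
        threshold = np.median(proj)
        left, right = idx[proj <= threshold], idx[proj > threshold]
        if len(left) == 0 or len(right) == 0:  # degenerate split
            return {"leaf": True, "idx": idx}
        return {"leaf": False, "direction": direction,
                "threshold": threshold,
                "left": build(left), "right": build(right)}

    return build(np.arange(len(points)))

def find_leaf(tree, x):
    """Route a query point to its leaf cell; returns training indices."""
    while not tree["leaf"]:
        side = "left" if x @ tree["direction"] <= tree["threshold"] else "right"
        tree = tree[side]
    return tree["idx"]
```

For data lying near a low-dimensional subspace embedded in many ambient dimensions, random directions still separate the points effectively, which is the adaptivity the abstract describes.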