2 code implementations • NeurIPS 2019 • Hanzhang Hu, John Langford, Rich Caruana, Saurajit Mukherjee, Eric Horvitz, Debadeepta Dey
We propose a neural architecture search (NAS) algorithm, Petridish, to iteratively add shortcut connections to existing network layers.
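The abstract describes growing a network by iteratively adding shortcut connections. As a toy illustration of that growth loop (not the actual Petridish algorithm, which selects shortcuts by training candidate layers), the sketch below greedily adds the candidate skip edge rated best by a user-supplied scoring function:

```python
def grow_shortcuts(num_layers, score_fn, max_new=2):
    """Toy sketch of iterative shortcut growth (NOT the Petridish
    selection rule): greedily add the candidate shortcut (i -> j)
    that a user-supplied score function rates best (lower = better)."""
    edges = {(i, i + 1) for i in range(num_layers - 1)}   # backbone chain
    candidates = {(i, j) for i in range(num_layers)
                  for j in range(i + 2, num_layers)}      # skip-ahead edges
    for _ in range(max_new):
        best = min(candidates - edges, key=score_fn)
        edges.add(best)
    return edges

# Hypothetical scoring rule for illustration: prefer the longest shortcuts.
arch = grow_shortcuts(4, score_fn=lambda e: -(e[1] - e[0]))
print(sorted(arch))
```

In the real algorithm the "score" would come from evaluating candidate connections during training; here it is an arbitrary stand-in.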
no code implementations • ICLR 2018 • Hanzhang Hu, Debadeepta Dey, Martial Hebert, J. Andrew Bagnell
We present an approach for anytime predictions in deep neural networks (DNNs).
1 code implementation • ICLR 2018 • Hanzhang Hu, Debadeepta Dey, Allison Del Giorno, Martial Hebert, J. Andrew Bagnell
Skip connections are increasingly utilized by deep neural networks to improve accuracy and cost-efficiency.
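For readers unfamiliar with the term, a skip (residual) connection adds a layer's input back onto its transformed output, letting features and gradients bypass the transformation. A generic minimal block, not the paper's specific architecture:

```python
import numpy as np

def residual_block(x, W, activation=np.tanh):
    """Minimal skip-connection (residual) block: output = x + f(x).
    Generic illustration only, not the architecture from the paper."""
    return x + activation(x @ W)

x = np.zeros(3)
W = np.random.default_rng(0).normal(size=(3, 3))
y = residual_block(x, W)
print(y)  # tanh(0) = 0, so the skip path returns x unchanged: [0. 0. 0.]
```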
no code implementations • 22 Aug 2017 • Hanzhang Hu, Debadeepta Dey, Martial Hebert, J. Andrew Bagnell
Experimentally, the adaptive weights induce more competitive anytime predictions on multiple recognition datasets and models than non-adaptive approaches, including weighting all losses equally.
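To make the contrast with equal weighting concrete, here is one hypothetical adaptive scheme (not the paper's rule): give each prediction head a weight proportional to its current loss, so harder heads receive more emphasis than under a uniform 1/k weighting.

```python
import numpy as np

def adaptive_weights(losses, eps=1e-8):
    """Hypothetical adaptive weighting for multi-head (anytime) losses:
    normalize the per-head losses so harder heads get larger weights.
    Illustrative only; not the adaptive rule from the paper."""
    losses = np.asarray(losses, dtype=float)
    return losses / (losses.sum() + eps)

w = adaptive_weights([2.0, 1.0, 1.0])
print(w)  # ~ [0.5, 0.25, 0.25]; compare to uniform [1/3, 1/3, 1/3]
```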
no code implementations • 1 Mar 2017 • Hanzhang Hu, Wen Sun, Arun Venkatraman, Martial Hebert, J. Andrew Bagnell
To generalize from batch to online, we first introduce the definition of an online weak learning edge; with it, for strongly convex and smooth loss functions, we present an algorithm, Streaming Gradient Boosting (SGB), with exponential shrinkage guarantees in the number of weak learners.
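The flavor of streaming boosting can be sketched as follows (illustrative only; this is not the paper's SGB, and it omits the weak-learning-edge conditions and shrinkage analysis): for squared loss, each weak learner takes an online gradient step toward the residual left by the learners before it, one example at a time.

```python
import numpy as np

def sgb_predict(learners, x):
    return sum(w @ x for w in learners)

def sgb_update(learners, x, y, lr=0.1):
    """Sketch of online gradient boosting for squared loss (NOT the
    paper's SGB): each weak learner (here a linear model) takes an
    online least-squares step toward the residual of its predecessors."""
    partial = 0.0
    for w in learners:
        residual = y - partial            # target for this weak learner
        pred = w @ x
        w += lr * (residual - pred) * x   # online gradient step
        partial += w @ x
    return learners

# Stream examples from y = 2x; the ensemble should recover the slope.
rng = np.random.default_rng(1)
learners = [np.zeros(1) for _ in range(3)]
for _ in range(500):
    x = rng.normal(size=1)
    sgb_update(learners, x, 2.0 * x[0])
print(sgb_predict(learners, np.ones(1)))  # approaches 2.0
```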
no code implementations • 19 Sep 2014 • Hanzhang Hu, Alexander Grubb, J. Andrew Bagnell, Martial Hebert
We theoretically guarantee that our algorithms achieve near-optimal linear predictions at each budget when a feature group is chosen.
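A simple way to picture budgeted feature-group selection (a toy greedy sketch, not the paper's algorithm or its near-optimality analysis): repeatedly add the affordable group with the best squared-error reduction per unit cost, refitting a least-squares model on the selected columns until the budget is exhausted.

```python
import numpy as np

def greedy_group_selection(X, y, groups, costs, budget):
    """Toy greedy feature-group selection under a budget (illustrative
    only, NOT the paper's algorithm): pick the affordable group with the
    largest least-squares error reduction per unit cost."""
    def err(cols):
        if not cols:
            return float(y @ y)
        beta, *_ = np.linalg.lstsq(X[:, cols], y, rcond=None)
        r = y - X[:, cols] @ beta
        return float(r @ r)

    chosen, spent = [], 0.0
    while True:
        cur = err([c for g in chosen for c in groups[g]])
        best, best_gain = None, 0.0
        for g in range(len(groups)):
            if g in chosen or spent + costs[g] > budget:
                continue
            new_err = err([c for gg in chosen + [g] for c in groups[gg]])
            gain = (cur - new_err) / costs[g]
            if gain > best_gain:
                best, best_gain = g, gain
        if best is None:
            return chosen
        chosen.append(best)
        spent += costs[best]

# y depends only on the second group's columns, so it is chosen first.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 4))
y = X[:, 2] - X[:, 3]
sel = greedy_group_selection(X, y, groups=[[0, 1], [2, 3]],
                             costs=[1.0, 1.0], budget=1.0)
print(sel)  # [1]
```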