Search Results for author: Hanzhang Hu

Found 6 papers, 2 papers with code

Efficient Forward Architecture Search

2 code implementations · NeurIPS 2019 · Hanzhang Hu, John Langford, Rich Caruana, Saurajit Mukherjee, Eric Horvitz, Debadeepta Dey

We propose a neural architecture search (NAS) algorithm, Petridish, to iteratively add shortcut connections to existing network layers.

Feature Selection · Neural Architecture Search · +1
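The abstract describes iteratively adding shortcut connections to an existing network. A minimal sketch of that greedy forward-search idea, assuming a toy scoring function in place of validation accuracy (this is an illustration, not the actual Petridish algorithm or its implementation):

```python
# Hypothetical sketch: greedily add the best-scoring shortcut
# connection, in the spirit of iterative forward architecture search.
# The score function here is a stand-in for real candidate evaluation.

def candidate_shortcuts(num_layers, existing):
    """All forward skip connections (i -> j, j > i + 1) not yet added."""
    return [(i, j)
            for i in range(num_layers)
            for j in range(i + 2, num_layers)
            if (i, j) not in existing]

def grow_network(num_layers, score, steps):
    """Iteratively add the shortcut that maximizes score(shortcuts)."""
    shortcuts = set()
    for _ in range(steps):
        cands = candidate_shortcuts(num_layers, shortcuts)
        if not cands:
            break
        best = max(cands, key=lambda c: score(shortcuts | {c}))
        shortcuts.add(best)
    return shortcuts

# Toy score: prefer long-range shortcuts.
chosen = grow_network(4, lambda s: sum(j - i for i, j in s), steps=2)
```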

Log-DenseNet: How to Sparsify a DenseNet

1 code implementation · ICLR 2018 · Hanzhang Hu, Debadeepta Dey, Allison Del Giorno, Martial Hebert, J. Andrew Bagnell

Skip connections are increasingly utilized by deep neural networks to improve accuracy and cost-efficiency.

Semantic Segmentation
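The title refers to sparsifying a DenseNet's skip connections. A short sketch, assuming the log-spaced connection pattern the paper's title suggests (each layer connects only to predecessors at power-of-two offsets, giving O(log i) inputs instead of DenseNet's O(i)); treat this as an illustrative reading, not the paper's exact specification:

```python
# Sketch of log-spaced skip connections: layer i takes input from
# layers i - 2^k for k = 0, 1, 2, ... (assumed pattern, for illustration).

def log_dense_inputs(i):
    """Indices of predecessor layers feeding layer i."""
    inputs, offset = [], 1
    while offset <= i:
        inputs.append(i - offset)
        offset *= 2
    return inputs

# Layer 8 connects to 4 predecessors rather than all 8.
print(log_dense_inputs(8))  # [7, 6, 4, 0]
```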

Learning Anytime Predictions in Neural Networks via Adaptive Loss Balancing

no code implementations · 22 Aug 2017 · Hanzhang Hu, Debadeepta Dey, Martial Hebert, J. Andrew Bagnell

Experimentally, the adaptive weights induce more competitive anytime predictions on multiple recognition datasets and models than non-adaptive approaches, including weighting all losses equally.
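The abstract contrasts adaptive loss weights with weighting all anytime losses equally. An illustrative sketch of one simple adaptive scheme (weights inversely proportional to each output's running-average loss, so early, higher-loss outputs do not dominate); this is an assumption for illustration, not the paper's exact balancing rule:

```python
# Hypothetical adaptive loss balancing: normalize inverse running-average
# losses into weights (a stand-in for the paper's scheme).

def adaptive_weights(running_avg_losses, eps=1e-8):
    """Weight each anytime output inversely to its average loss."""
    inv = [1.0 / (l + eps) for l in running_avg_losses]
    total = sum(inv)
    return [w / total for w in inv]

def weighted_total_loss(losses, weights):
    """Combined training loss over all anytime outputs."""
    return sum(w * l for w, l in zip(weights, losses))

avgs = [2.0, 1.0, 0.5]      # early outputs typically lose more
w = adaptive_weights(avgs)  # later outputs receive larger weight
```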

Gradient Boosting on Stochastic Data Streams

no code implementations · 1 Mar 2017 · Hanzhang Hu, Wen Sun, Arun Venkatraman, Martial Hebert, J. Andrew Bagnell

To generalize from the batch to the online setting, we first introduce a definition of online weak learning edge; with it, for strongly convex and smooth loss functions, we present an algorithm, Streaming Gradient Boosting (SGB), with exponential shrinkage guarantees in the number of weak learners.
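The abstract describes boosting over a data stream, where weak learners are updated online. A minimal sketch of that idea for squared loss, with each learner trained on the residual left by the partial ensemble before it; the constant weak learner and update order are assumptions for illustration, not the paper's SGB algorithm or its shrinkage schedule:

```python
# Hypothetical sketch of gradient boosting on a stream: each weak
# learner fits the residual (negative gradient of squared loss) of
# the ensemble built from the learners before it.

class ConstantLearner:
    """Toy weak learner: a single running-mean constant predictor."""
    def __init__(self):
        self.value, self.n = 0.0, 0
    def predict(self, x):
        return self.value
    def update(self, x, target):
        self.n += 1
        self.value += (target - self.value) / self.n

def sgb_step(learners, x, y, lr=1.0):
    """One streaming step: train each learner on the residual left
    by the partial ensemble, then return the full prediction."""
    pred = 0.0
    for h in learners:
        h.update(x, y - pred)     # residual = -gradient of squared loss
        pred += lr * h.predict(x)
    return pred

learners = [ConstantLearner() for _ in range(3)]
for x, y in [(0, 1.0), (0, 1.0), (0, 1.0)]:
    p = sgb_step(learners, x, y)
```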

Efficient Feature Group Sequencing for Anytime Linear Prediction

no code implementations · 19 Sep 2014 · Hanzhang Hu, Alexander Grubb, J. Andrew Bagnell, Martial Hebert

We theoretically guarantee that our algorithms achieve near-optimal linear predictions at each budget when a feature group is chosen.