no code implementations • 21 Dec 2018 • Maria De-Arteaga, Amanda Coston, William Herlands
This is the Proceedings of the NeurIPS 2018 Workshop on Machine Learning for the Developing World: Achieving Sustainable Impact, held in Montreal, Canada, on December 8, 2018.
no code implementations • 28 Oct 2018 • William Herlands, Daniel B. Neill, Hannes Nickisch, Andrew Gordon Wilson
We provide a model-agnostic formalization of change surfaces, illustrating how they can provide variable, heterogeneous, and non-monotonic rates of change across multiple dimensions.
no code implementations • 4 Apr 2018 • William Herlands, Edward McFowland III, Andrew Gordon Wilson, Daniel B. Neill
We introduce methods for identifying anomalous patterns in non-i.i.d. data by combining Gaussian processes with a novel log-likelihood ratio statistic and subset scanning techniques.
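The combination described above can be pictured with a much-simplified sketch. Here we assume the Gaussian process has already been fit, reducing the data to approximately standard-normal residuals, and we replace general subset scanning with a scan over contiguous windows, scoring each window with a Gaussian log-likelihood ratio for an elevated mean. The function name and scoring details below are illustrative assumptions, not the paper's method.

```python
import numpy as np

def llr_scan(residuals, min_len=3):
    """Scan contiguous windows of (assumed standard-normal) residuals for
    an elevated mean.  Each window S is scored with the Gaussian
    log-likelihood ratio LLR(S) = |S| * mean(r_S)^2 / 2; the window
    maximizing the score is returned along with that score."""
    n = len(residuals)
    cumsum = np.concatenate([[0.0], np.cumsum(residuals)])
    best, best_score = (0, min_len), -np.inf
    for i in range(n):
        for j in range(i + min_len, n + 1):
            m = (cumsum[j] - cumsum[i]) / (j - i)   # window mean
            score = (j - i) * m * m / 2.0           # Gaussian LLR for a mean shift
            if score > best_score:
                best_score, best = score, (i, j)
    return best, best_score
```

For example, injecting a mean shift into residuals `r[80:100] += 3.0` makes `llr_scan(r)` return a window close to `(80, 100)`, since the shifted window's LLR dominates any window of pure noise.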
no code implementations • 27 Nov 2017 • Andrew Gordon Wilson, Jason Yosinski, Patrice Simard, Rich Caruana, William Herlands
This is the Proceedings of the NIPS 2017 Symposium on Interpretable Machine Learning, held in Long Beach, California, USA, on December 7, 2017.
no code implementations • 27 Nov 2017 • Maria De-Arteaga, William Herlands
This is the Proceedings of the NIPS 2017 Workshop on Machine Learning for the Developing World, held in Long Beach, California, USA, on December 8, 2017.
no code implementations • 6 Oct 2017 • Daniel B. Neill, William Herlands
We describe two recently proposed machine learning approaches for discovering emerging trends in fatal accidental drug overdoses.
no code implementations • 28 Nov 2016 • Andrew Gordon Wilson, Been Kim, William Herlands
This is the Proceedings of the NIPS 2016 Workshop on Interpretable Machine Learning for Complex Systems, held in Barcelona, Spain, on December 9, 2016.
no code implementations • 13 Nov 2015 • William Herlands, Andrew Wilson, Hannes Nickisch, Seth Flaxman, Daniel Neill, Wilbert van Panhuis, Eric Xing
We present a scalable Gaussian process model for identifying and characterizing smooth multidimensional changepoints, and automatically learning changes in expressive covariance structure.
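One way to picture a smooth changepoint in covariance structure is a sigmoid-weighted mixture of two kernels, so the process transitions gradually between regimes rather than switching abruptly. The toy sketch below is in this spirit only; the `change_surface_kernel` name, the RBF components, and the specific sigmoid weighting are illustrative assumptions, not the paper's model.

```python
import numpy as np

def rbf(x1, x2, ls):
    """Squared-exponential kernel on 1-D inputs with lengthscale ls."""
    d = x1[:, None] - x2[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

def change_surface_kernel(x1, x2, x0, width, ls_a, ls_b):
    """Smooth-changepoint covariance: a sigmoid s(x) centered at x0 mixes
    two RBF kernels, so regime A (lengthscale ls_a) dominates before x0
    and regime B (lengthscale ls_b) dominates after it.  The result is a
    sum of two PSD terms and hence a valid covariance."""
    s1 = 1.0 / (1.0 + np.exp(-(x1 - x0) / width))
    s2 = 1.0 / (1.0 + np.exp(-(x2 - x0) / width))
    Ka = rbf(x1, x2, ls_a)
    Kb = rbf(x1, x2, ls_b)
    return ((1 - s1)[:, None] * (1 - s2)[None, :] * Ka
            + s1[:, None] * s2[None, :] * Kb)
```

Because the mixture is `diag(1-s) Ka diag(1-s) + diag(s) Kb diag(s)`, each term is positive semi-definite, so the combined kernel remains a valid covariance function.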
no code implementations • 13 Nov 2015 • William Herlands, Maria De-Arteaga, Daniel Neill, Artur Dubrawski
We compute approximate solutions to L0-regularized linear regression by using L1 regularization, also known as the Lasso, as an initialization step.
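The initialization idea reads, in sketch form: solve the Lasso (here via plain iterative soft-thresholding), take its support, and refit ordinary least squares restricted to that support as an approximate L0 solution. This is a minimal illustration of the general idea under those assumptions, not the authors' full algorithm.

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of the L1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def lasso_ista(X, y, lam, n_iter=500):
    """L1-regularized least squares via iterative soft-thresholding (ISTA)."""
    n, d = X.shape
    L = np.linalg.norm(X, 2) ** 2 / n  # Lipschitz constant of the gradient
    beta = np.zeros(d)
    for _ in range(n_iter):
        grad = X.T @ (X @ beta - y) / n
        beta = soft_threshold(beta - grad / L, lam / L)
    return beta

def l0_approx(X, y, lam):
    """Approximate L0-regularized regression: take the Lasso support,
    then refit ordinary least squares on those coordinates only."""
    beta_l1 = lasso_ista(X, y, lam)
    support = np.flatnonzero(np.abs(beta_l1) > 1e-8)
    beta = np.zeros(X.shape[1])
    if support.size:
        beta[support] = np.linalg.lstsq(X[:, support], y, rcond=None)[0]
    return beta
```

The refit step removes the shrinkage bias the L1 penalty introduces: on data generated from a sparse linear model with small noise, the refit coefficients land close to the true nonzero values even though the Lasso estimates themselves are shrunk toward zero.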