1 code implementation • NeurIPS 2019 • Matthew Staib, Stefanie Jegelka
We show that MMD DRO is roughly equivalent to regularization by the Hilbert norm and, as a byproduct, reveal deep connections to classic results in statistical learning.
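The Hilbert-norm connection rests on the standard identity MMD(P, Q) = ||μ_P − μ_Q||_H, the RKHS distance between kernel mean embeddings. As background, here is a minimal sketch of the usual empirical (biased) squared-MMD estimator; the RBF kernel and bandwidth `gamma` are placeholder choices, and this is not the paper's DRO formulation itself:

```python
import numpy as np

def rbf_kernel(a, b, gamma=1.0):
    # Pairwise RBF kernel matrix: k(a_i, b_j) = exp(-gamma * ||a_i - b_j||^2)
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def mmd2(x, y, gamma=1.0):
    # Biased empirical estimate of MMD^2 = ||mu_P - mu_Q||_H^2
    kxx = rbf_kernel(x, x, gamma).mean()
    kyy = rbf_kernel(y, y, gamma).mean()
    kxy = rbf_kernel(x, y, gamma).mean()
    return kxx + kyy - 2 * kxy
```

Identical samples give an MMD of zero, while well-separated samples give a value near 2 under the RBF kernel.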
no code implementations • 26 Jan 2019 • Matthew Staib, Sashank J. Reddi, Satyen Kale, Sanjiv Kumar, Suvrit Sra
Adaptive methods such as Adam and RMSProp are widely used in deep learning but are not well understood.
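For reference, the Adam update the abstract refers to maintains exponential moving averages of gradients and squared gradients with bias correction; this is a minimal single-step sketch of the standard rule (hyperparameter defaults follow common practice), not the paper's analysis:

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    # One Adam update: EMAs of the gradient (m) and squared gradient (v),
    # with bias correction compensating for their zero initialization.
    m = b1 * m + (1 - b1) * grad
    v = b2 * v + (1 - b2) * grad ** 2
    m_hat = m / (1 - b1 ** t)          # bias-corrected first moment
    v_hat = v / (1 - b2 ** t)          # bias-corrected second moment
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v
```

Iterating this step on a simple quadratic drives the iterate toward the minimizer.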
1 code implementation • 31 Dec 2018 • Edward Kim, Zach Jensen, Alexander van Grootel, Kevin Huang, Matthew Staib, Sheshera Mysore, Haw-Shiuan Chang, Emma Strubell, Andrew McCallum, Stefanie Jegelka, Elsa Olivetti
Leveraging new data sources is a key step in accelerating the pace of materials design and discovery.
no code implementations • 14 Feb 2018 • Matthew Staib, Bryan Wilder, Stefanie Jegelka
We also show compelling empirical evidence that DRO improves generalization to the unknown stochastic submodular function.
1 code implementation • NeurIPS 2017 • Matthew Staib, Sebastian Claici, Justin Solomon, Stefanie Jegelka
Our method is even robust to nonstationary input distributions and produces a barycenter estimate that tracks the input measures over time.
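As a one-dimensional illustration of the barycenter object being estimated (not the paper's parallel streaming algorithm): for equally weighted empirical measures with the same sample count, the 2-Wasserstein barycenter averages the quantile functions, i.e. the sorted samples:

```python
import numpy as np

def barycenter_1d(samples_list):
    # 2-Wasserstein barycenter of equally weighted 1-D empirical measures
    # with equal sample counts: average the sorted samples, which averages
    # the quantile (inverse CDF) functions.
    sorted_samples = [np.sort(np.asarray(s)) for s in samples_list]
    return np.mean(sorted_samples, axis=0)
```

For example, the barycenter of samples {0, 1, 2} and {2, 3, 4} is {1, 2, 3}.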
no code implementations • ICML 2017 • Matthew Staib, Stefanie Jegelka
The optimal allocation of resources for maximizing influence, information spread, or coverage has gained attention in recent years, particularly in machine learning and data mining.
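The coverage objectives behind such allocation problems are monotone submodular, for which the classic greedy algorithm achieves a (1 − 1/e)-approximation. As background only (the ICML 2017 paper studies a continuous budget-allocation variant), here is a minimal greedy max-coverage sketch:

```python
def greedy_max_coverage(sets, k):
    # Greedy selection of k sets maximizing the number of covered elements:
    # the classic (1 - 1/e)-approximation for monotone submodular objectives.
    covered, chosen = set(), []
    for _ in range(k):
        # Pick the set with the largest marginal coverage gain.
        best = max(sets, key=lambda s: len(set(s) - covered))
        chosen.append(best)
        covered |= set(best)
    return chosen, covered
```

Ties in marginal gain are broken by list order; the guarantee holds regardless of tie-breaking.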