1 code implementation • 25 Mar 2018 • Austin C. Kozlowski, Matt Taddy, James A. Evans
We demonstrate the utility of a new methodological tool, neural-network word embedding models, for large-scale text analysis, revealing how these models produce richer insights into cultural associations and categories than is possible with prior methods.
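The core operation in this line of work is projecting word vectors onto "cultural dimensions" built from antonym pairs (e.g., rich vs. poor for affluence). Below is a minimal sketch of that projection, assuming a pre-trained gensim embedding; the file path and word pairs are illustrative, not from the paper.

```python
import numpy as np
from gensim.models import KeyedVectors

# Hypothetical pre-trained embedding file; path and pairs are illustrative.
kv = KeyedVectors.load_word2vec_format("embeddings.bin", binary=True)

# Build an "affluence" dimension by averaging antonym-pair differences.
pairs = [("rich", "poor"), ("wealthy", "impoverished"), ("luxury", "cheap")]
dim = np.mean([kv[a] - kv[b] for a, b in pairs], axis=0)
dim /= np.linalg.norm(dim)

def project(word):
    """Cosine projection of a word onto the cultural dimension."""
    v = kv[word]
    return float(v @ dim / np.linalg.norm(v))

for w in ["doctor", "janitor", "opera", "bowling"]:
    print(w, round(project(w), 3))
```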
1 code implementation • NeurIPS 2019 • Jonas Mueller, Vasilis Syrgkanis, Matt Taddy
We consider dynamic pricing with many products under an evolving but low-dimensional demand model.
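The following is not the authors' online pricing algorithm, only a static toy showing why a low-dimensional demand model helps: noisy per-product demand estimates can be pooled by projecting them onto a shared low-rank subspace before setting prices. All names and the simulation setup are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_products, n_periods, d, rank = 200, 60, 6, 2

# Toy setup: demand for product i is q_t = x_t' theta_i + noise, with the
# parameter vectors theta_i lying near a shared rank-2 subspace.
B = rng.normal(size=(n_products, rank)) @ rng.normal(size=(rank, d))
theta = B + 0.05 * rng.normal(size=(n_products, d))

X = rng.normal(size=(n_products, n_periods, d))  # price and other features
Q = np.einsum("ntd,nd->nt", X, theta) + 0.5 * rng.normal(size=(n_products, n_periods))

# Per-product OLS is noisy with few periods per product ...
ols = np.stack([np.linalg.lstsq(X[i], Q[i], rcond=None)[0] for i in range(n_products)])

# ... so pool across products by projecting the estimates onto their top
# principal components, exploiting the low-dimensional demand structure.
mu = ols.mean(axis=0)
U, s, Vt = np.linalg.svd(ols - mu, full_matrices=False)
pooled = mu + (U[:, :rank] * s[:rank]) @ Vt[:rank]

print(f"OLS error {np.linalg.norm(ols - theta):.1f} "
      f"vs pooled error {np.linalg.norm(pooled - theta):.1f}")
```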
no code implementations • 28 Dec 2017 • Vira Semenova, Matt Goldman, Victor Chernozhukov, Matt Taddy
The first step of our procedure is orthogonalization, where we partial out the controls and unit effects from the outcome and the base treatment and take the cross-fitted residuals.
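A minimal sketch of that partialling-out step in a cross-sectional toy (in the paper's panel setting one would also demean unit effects): residualize the outcome and the base treatment on the controls with out-of-fold predictions, then regress residuals on residuals. The model choice (random forests) and all names are illustrative.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold

def cross_fit_residuals(X, v, n_splits=5, seed=0):
    """Residualize v on controls X using out-of-fold predictions."""
    res = np.empty_like(v, dtype=float)
    for train, test in KFold(n_splits, shuffle=True, random_state=seed).split(X):
        model = RandomForestRegressor(n_estimators=200, random_state=seed)
        model.fit(X[train], v[train])
        res[test] = v[test] - model.predict(X[test])
    return res

# Toy data: outcome y, base treatment d, controls X; true effect of d is 2.
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 5))
d = X[:, 0] + rng.normal(size=2000)
y = 2.0 * d + X @ np.ones(5) + rng.normal(size=2000)

# Partial out the controls from outcome and treatment, then regress the
# cross-fitted residuals on each other.
y_res = cross_fit_residuals(X, y)
d_res = cross_fit_residuals(X, d)
effect = LinearRegression().fit(d_res.reshape(-1, 1), y_res).coef_[0]
print(f"estimated effect: {effect:.2f}")  # close to the true 2.0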
1 code implementation • ICML 2018 • Yash Deshpande, Lester Mackey, Vasilis Syrgkanis, Matt Taddy
Estimators computed from adaptively collected data do not behave like their non-adaptive brethren.
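A quick simulation of the failure mode that motivates the paper (a toy of my own, not the authors' decorrelation method): under greedy adaptive sampling, the per-arm sample means come out biased even when both arms are identical.

```python
import numpy as np

rng = np.random.default_rng(0)

def greedy_bandit_sample_means(n_rounds=100, n_sims=2000):
    """Two-armed greedy bandit; both arms have true mean 0.
    Returns the per-arm sample means averaged over simulations."""
    totals = np.zeros(2)
    for _ in range(n_sims):
        sums, counts = rng.normal(size=2), np.ones(2)  # one forced pull each
        for _ in range(n_rounds - 2):
            arm = int(np.argmax(sums / counts))        # play the current leader
            sums[arm] += rng.normal()
            counts[arm] += 1
        totals += sums / counts
    return totals / n_sims

# Both true means are 0, yet the adaptive sample means are biased downward:
# an arm abandoned after unlucky draws keeps its unlucky estimate forever.
print(greedy_bandit_sample_means())
```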
1 code implementation • ICML 2017 • Jason Hartford, Greg Lewis, Kevin Leyton-Brown, Matt Taddy
Counterfactual prediction requires understanding causal relationships between so-called treatment and outcome variables.
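The paper replaces both stages of instrumental-variable estimation with neural networks; the classical linear two-stage version below shows the structure being generalized. The data-generating process and coefficients are made up for illustration.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Toy IV setup: unobserved confounder e raises both treatment p (price) and
# outcome y (sales), so naive regression of y on p is biased; the
# instrument z shifts p but not y directly.
rng = np.random.default_rng(0)
n = 5000
e = rng.normal(size=n)                         # confounder
z = rng.normal(size=n)                         # instrument
p = z + e + rng.normal(size=n)                 # treatment
y = -1.0 * p + 2.0 * e + rng.normal(size=n)    # true causal effect: -1

naive = LinearRegression().fit(p.reshape(-1, 1), y).coef_[0]

# Two-stage structure: stage 1 predicts treatment from the instrument,
# stage 2 regresses the outcome on the stage-1 prediction.
p_hat = LinearRegression().fit(z.reshape(-1, 1), p).predict(z.reshape(-1, 1))
iv = LinearRegression().fit(p_hat.reshape(-1, 1), y).coef_[0]

print(f"naive: {naive:.2f}, two-stage: {iv:.2f}, truth: -1.00")
```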
no code implementations • WS 2017 • Shyam Upadhyay, Kai-Wei Chang, Matt Taddy, Adam Kalai, James Zou
We present a multi-view Bayesian non-parametric algorithm which improves multi-sense word embeddings by (a) using multilingual (i.e., more than two languages) corpora to significantly improve sense embeddings beyond what one achieves with bilingual information, and (b) using a principled approach to learn a variable number of senses per word in a data-driven manner.
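To illustrate point (b) only, here is a toy DP-means-style clustering of context vectors that spawns a new sense whenever no existing sense is close enough, so the number of senses per word is data-driven. This is emphatically not the paper's multi-view Bayesian non-parametric model; the threshold and synthetic data are assumptions.

```python
import numpy as np

def induce_senses(context_vecs, lam=0.8):
    """Toy sense induction: assign each occurrence's context vector to the
    nearest sense centroid, spawning a new sense when no centroid is within
    cosine distance lam."""
    centroids, members = [], []
    for v in context_vecs:
        v = v / np.linalg.norm(v)
        if centroids:
            sims = [c @ v for c in centroids]
            best = int(np.argmax(sims))
            if 1 - sims[best] < lam:
                members[best].append(v)
                centroids[best] = np.mean(members[best], axis=0)
                centroids[best] /= np.linalg.norm(centroids[best])
                continue
        centroids.append(v)
        members.append([v])
    return centroids

# Two well-separated context clusters (e.g., "bank") -> two induced senses.
rng = np.random.default_rng(0)
bank_finance = rng.normal([1, 0, 0], 0.1, size=(50, 3))
bank_river = rng.normal([0, 1, 0], 0.1, size=(50, 3))
print("induced senses:", len(induce_senses(np.vstack([bank_finance, bank_river]))))
```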
no code implementations • 30 Dec 2016 • Jason Hartford, Greg Lewis, Kevin Leyton-Brown, Matt Taddy
We are in the middle of a remarkable rise in the use and capability of artificial intelligence.
1 code implementation • IJCNLP 2015 • Matt Taddy
There have been many recent advances in the structure and measurement of distributed language models: those that map from words to a vector space that is rich in information about word choice and composition.
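The paper's inversion idea is to train one language model per document class and classify a new document by which class model assigns it the higher likelihood. A minimal sketch with gensim, whose Word2Vec.score() requires hierarchical softmax; the tiny corpora are illustrative stand-ins, and likelihoods across models are compared loosely here.

```python
import numpy as np
from gensim.models import Word2Vec

# One language model per class; corpora below are toy stand-ins.
pos_docs = [["great", "movie", "loved", "it"], ["wonderful", "acting"]] * 50
neg_docs = [["terrible", "movie", "hated", "it"], ["awful", "acting"]] * 50

# score() requires hierarchical softmax (hs=1, negative=0) in gensim.
kwargs = dict(vector_size=50, min_count=1, hs=1, negative=0, sg=1, seed=0)
m_pos = Word2Vec(pos_docs, **kwargs)
m_neg = Word2Vec(neg_docs, **kwargs)

# Classify by Bayes rule with equal class priors.
doc = [["loved", "acting"]]
logp = np.array([m_pos.score(doc)[0], m_neg.score(doc)[0]])
probs = np.exp(logp - logp.max())
print("P(pos | doc) =", probs[0] / probs.sum())
```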
no code implementations • 8 Feb 2015 • Matt Taddy, Chun-Sheng Chen, Jun Yu, Mitch Wyle
We derive ensembles of decision trees through a nonparametric Bayesian model, allowing us to view random forests as samples from a posterior distribution.
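A minimal sketch of that view, under the assumption that each tree is fit with independent exponential (Dirichlet-style) observation weights, i.e., a Bayesian bootstrap, so the ensemble reads as posterior draws. Hyperparameters and data are illustrative.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(500, 3))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=500)

# Bayesian-bootstrap forest: one tree per draw of exponential weights.
trees = []
for seed in range(100):
    w = rng.exponential(scale=1.0, size=len(y))  # unnormalized Dirichlet weights
    t = DecisionTreeRegressor(min_samples_leaf=5, random_state=seed)
    t.fit(X, y, sample_weight=w)
    trees.append(t)

# Posterior mean and spread of predictions at a test point.
x0 = np.array([[0.5, 0.0, 0.0]])
preds = np.array([t.predict(x0)[0] for t in trees])
print(f"posterior mean {preds.mean():.3f} +/- {preds.std():.3f}")
```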
no code implementations • 9 Dec 2010 • Matt Taddy
Multinomial inverse regression is introduced as a general tool for simplifying predictor sets that can be represented as draws from a multinomial distribution; we show that logistic regression of phrase counts onto document annotations can be used to obtain low-dimensional document representations that are rich in sentiment information.
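The structure is: fit the inverse regression of word counts on the annotation, then collapse each document to a sufficient-reduction score via the fitted loadings. In the sketch below the loadings are a crude smoothed log-odds stand-in for the paper's multinomial logistic fit, and the vocabulary and rates are made up.

```python
import numpy as np

# Toy corpus: word-count matrix X (docs x vocab) and binary sentiment y.
rng = np.random.default_rng(0)
pos_rates = np.array([4, 3, 0.5, 0.2, 2, 2])   # good great bad awful movie plot
neg_rates = np.array([0.5, 0.3, 4, 3, 2, 2])
y = np.repeat([1, 0], 100)
X = np.vstack([rng.poisson(pos_rates, (100, 6)),
               rng.poisson(neg_rates, (100, 6))])

# Inverse regression: per-word loadings from counts given sentiment
# (smoothed log-odds as a stand-in for the multinomial logistic coefficients).
f_pos = X[y == 1].sum(axis=0) + 1.0
f_neg = X[y == 0].sum(axis=0) + 1.0
phi = np.log(f_pos / f_pos.sum()) - np.log(f_neg / f_neg.sum())

# Sufficient-reduction projection: one length-normalized score per document
# that carries the sentiment information in the counts.
m = X.sum(axis=1)
z = (X @ phi) / np.maximum(m, 1)
print("mean z | positive:", z[y == 1].mean().round(2),
      "| negative:", z[y == 0].mean().round(2))
```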