no code implementations • 9 Dec 2023 • Yujie Wu, Giovanni Parmigiani, Boyu Ren
First, we extend a flexible single-source DA algorithm for classification to regression problems through outcome coarsening.
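The coarsening step can be illustrated with a minimal sketch: discretize the continuous outcome into classes so that a classification-only algorithm becomes applicable. Quantile binning is an assumption made here for illustration; the paper's actual coarsening scheme may differ.

```python
import numpy as np

def coarsen_outcome(y, n_bins=4):
    """Discretize a continuous outcome into quantile-based classes,
    turning a regression target into a classification target."""
    edges = np.quantile(y, np.linspace(0, 1, n_bins + 1)[1:-1])
    return np.digitize(y, edges)  # integer labels 0 .. n_bins-1

rng = np.random.default_rng(0)
y = rng.normal(size=1000)          # continuous regression outcome
labels = coarsen_outcome(y)        # coarsened classification labels
# quantile edges give roughly balanced classes by construction
```

A classification DA method can then be trained on `labels`; the choice of `n_bins` trades off resolution of the original outcome against class sizes.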
1 code implementation • 1 Jun 2023 • Cathy Shyr, Boyu Ren, Prasad Patil, Giovanni Parmigiani
To this end, we propose a framework for multi-study HTE estimation that accounts for between-study heterogeneity in the nuisance functions and treatment effects.
no code implementations • 30 Apr 2023 • Giovanni Parmigiani
In this article I propose an approach for defining replicability for prediction rules.
1 code implementation • 16 Dec 2022 • Gabriel Loewinger, Kayhan Behdin, Kenneth T. Kishida, Giovanni Parmigiani, Rahul Mazumder
Allowing the regression coefficients of tasks to have different sparsity patterns (i.e., different supports), we propose a modeling framework for MTL that encourages models to share information across tasks, for a given covariate, by separately 1) shrinking the coefficient supports together and/or 2) shrinking the coefficient values together.
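The second mechanism, shrinking coefficient values together, can be sketched with a toy gradient-descent objective: per-task least squares plus a penalty pulling each task's coefficients toward the across-task mean. This is an illustrative stand-in, not the paper's estimator, and support sharing (mechanism 1) would instead use a group-sparsity penalty, omitted here.

```python
import numpy as np

def mtl_fit(Xs, ys, lam=0.1, lr=0.05, n_iter=3000):
    """Minimize sum_k mean((y_k - X_k b_k)^2) + lam * ||b_k - bbar||^2,
    which shrinks each task's coefficient VALUES toward the mean bbar
    across tasks, so tasks borrow strength from one another."""
    K, p = len(Xs), Xs[0].shape[1]
    B = np.zeros((K, p))
    for _ in range(n_iter):
        bbar = B.mean(axis=0)  # treated as fixed within each sweep
        for k in range(K):
            n = len(ys[k])
            grad = (2 / n) * Xs[k].T @ (Xs[k] @ B[k] - ys[k]) \
                   + 2 * lam * (B[k] - bbar)
            B[k] = B[k] - lr * grad
    return B

rng = np.random.default_rng(0)
true = np.array([[1.0, 2.0, 0.0],   # similar but not identical
                 [1.2, 1.8, 0.0]])  # coefficients across two tasks
Xs = [rng.normal(size=(200, 3)) for _ in range(2)]
ys = [Xs[k] @ true[k] + 0.1 * rng.normal(size=200) for k in range(2)]
B = mtl_fit(Xs, ys)
```

Larger `lam` pulls the two coefficient vectors closer together; `lam=0` recovers independent per-task least squares.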
1 code implementation • 11 Jul 2022 • Cathy Shyr, Pragya Sur, Giovanni Parmigiani, Prasad Patil
In the regression setting, we provide theoretical guidelines based on an analytical transition point to determine whether it is more beneficial to merge or to ensemble for boosting with linear learners.
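The two strategies being compared can be sketched as follows, with ordinary least squares standing in for the linear learners; this sketch only sets up merging versus ensembling and does not compute the paper's analytical transition point.

```python
import numpy as np

def fit_ols(X, y):
    """Least-squares fit standing in for a linear learner."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

# simulate two studies with between-study heterogeneity in coefficients
rng = np.random.default_rng(1)
b = np.array([1.0, -0.5, 0.3])
studies = []
for _ in range(2):
    bk = b + 0.1 * rng.normal(size=3)       # study-specific perturbation
    X = rng.normal(size=(100, 3))
    studies.append((X, X @ bk + rng.normal(size=100)))

# merging: pool all rows and fit a single model
Xm = np.vstack([X for X, _ in studies])
ym = np.concatenate([y for _, y in studies])
b_merge = fit_ols(Xm, ym)

# ensembling: fit per study, average predictions (equal weights)
bs = [fit_ols(X, y) for X, y in studies]
Xtest = rng.normal(size=(200, 3))
pred_merge = Xtest @ b_merge
pred_ens = np.mean([Xtest @ bk for bk in bs], axis=0)
```

Intuitively, merging wins when between-study heterogeneity is small relative to noise, and ensembling wins once heterogeneity dominates; the paper characterizes where that crossover occurs for boosting.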
1 code implementation • 19 Sep 2021 • Gabriel Loewinger, Rolando Acosta Nunez, Rahul Mazumder, Giovanni Parmigiani
Importantly, our approach outperforms multi-study stacking and other standard methods in this application.
1 code implementation • 25 Jun 2021 • Zoe Guan, Giovanni Parmigiani, Danielle Braun, Lorenzo Trippa
We validate the models using data from the Cancer Genetics Network.
1 code implementation • 17 May 2021 • Maya Ramchandran, Rajarshi Mukherjee, Giovanni Parmigiani
Adapting machine learning algorithms to better handle clustering or batch effects within training data sets is important across a wide variety of biological applications.
1 code implementation • 13 May 2021 • Theodore Huang, Gregory Idos, Christine Hong, Stephen Gruber, Giovanni Parmigiani, Danielle Braun
Via simulations we show that integrating gradient boosting with an existing Mendelian model produces an improved model that outperforms both the Mendelian model and a model built with gradient boosting alone.
no code implementations • 20 Jun 2020 • Zhun Deng, Frances Ding, Cynthia Dwork, Rachel Hong, Giovanni Parmigiani, Prasad Patil, Pragya Sur
We study an adversarial loss function for $k$ domains and precisely characterize its limiting behavior as $k$ grows, formalizing and proving the intuition, backed by experiments, that observing data from a larger number of domains helps.
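One common form of such an adversarial objective is the worst-case (max over domains) loss; the sketch below evaluates a max-over-domains mean squared error for a pooled linear fit. The exact loss studied in the paper may differ, and this snippet only evaluates the objective rather than optimizing it adversarially.

```python
import numpy as np

def adversarial_loss(beta, domains):
    """Worst-case mean squared error over the k domains:
    max_k mean((y_k - X_k beta)^2)."""
    return max(float(np.mean((y - X @ beta) ** 2)) for X, y in domains)

rng = np.random.default_rng(3)
b = np.array([1.0, -1.0])
domains = []
for _ in range(5):  # k = 5 domains with mild heterogeneity
    X = rng.normal(size=(80, 2))
    y = X @ (b + 0.2 * rng.normal(size=2)) + rng.normal(size=80)
    domains.append((X, y))

# pooled least-squares fit across all domains
Xp = np.vstack([X for X, _ in domains])
yp = np.concatenate([y for _, y in domains])
beta_pool = np.linalg.lstsq(Xp, yp, rcond=None)[0]

worst = adversarial_loss(beta_pool, domains)
avg = np.mean([np.mean((y - X @ beta_pool) ** 2) for X, y in domains])
# the worst-case loss always upper-bounds the average loss
```

The paper's question is how this max-form objective behaves as the number of domains k grows, which the sketch makes easy to experiment with.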
1 code implementation • 17 May 2019 • Zoe Guan, Giovanni Parmigiani, Prasad Patil
A critical decision point when training predictors using multiple studies is whether these studies should be combined or treated separately.
no code implementations • 24 Apr 2019 • Yujia Bao, Zhengyi Deng, Yan Wang, Heeyoon Kim, Victor Diego Armengol, Francisco Acevedo, Nofal Ouardaoui, Cathy Wang, Giovanni Parmigiani, Regina Barzilay, Danielle Braun, Kevin S. Hughes
We developed and evaluated two machine learning models to classify abstracts as relevant to the penetrance (risk of cancer for germline mutation carriers) or prevalence of germline genetic mutations.
1 code implementation • 4 Mar 2019 • Giovanni Parmigiani
The fuzzy ROC extends Receiver Operating Characteristic (ROC) curve visualization to the setting where some data points, falling in an indeterminacy region, are left unclassified.
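A minimal sketch of the underlying idea: compute ROC points only on observations whose scores fall outside an indeterminacy region. The function name and the simple drop-abstentions rule are illustrative assumptions; the fuzzy ROC itself summarizes the range of curves compatible with the indeterminate points rather than simply dropping them.

```python
import numpy as np

def roc_with_abstention(scores, labels, lo, hi, thresholds):
    """ROC points using only points outside the indeterminacy
    region [lo, hi]; points inside the region are abstained on."""
    keep = (scores < lo) | (scores > hi)
    s, y = scores[keep], labels[keep]
    pts = []
    for t in thresholds:
        pred = s >= t
        tpr = float(np.mean(pred[y == 1])) if np.any(y == 1) else 0.0
        fpr = float(np.mean(pred[y == 0])) if np.any(y == 0) else 0.0
        pts.append((fpr, tpr))
    return pts

scores = np.array([0.1, 0.2, 0.45, 0.55, 0.8, 0.9])
labels = np.array([0, 0, 0, 1, 1, 1])
# scores 0.45 and 0.55 fall inside the indeterminacy region [0.4, 0.6]
pts = roc_with_abstention(scores, labels, 0.4, 0.6, [0.0, 0.5, 1.0])
```

Varying how the abstained points are resolved (all positive, all negative, or anything between) traces out the band of curves that the fuzzy ROC visualizes.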
1 code implementation • 25 Apr 2017 • Joseph Antonelli, Giovanni Parmigiani, Francesca Dominici
In observational studies, estimation of a causal effect of a treatment on an outcome relies on proper adjustment for confounding.
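Why adjustment matters can be shown with a small simulation: a naive difference in means is biased by a confounder, while inverse-propensity weighting recovers the true effect. This uses a known propensity score purely for illustration; the paper's contribution concerns how to adjust when the confounders must be selected, which this sketch does not address.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 20000
x = rng.normal(size=n)                       # confounder
p_treat = 1 / (1 + np.exp(-x))               # treatment depends on x
t = rng.binomial(1, p_treat)
y = 2.0 * t + 1.5 * x + rng.normal(size=n)   # true treatment effect = 2

# naive contrast is biased: treated units also have larger x
naive = y[t == 1].mean() - y[t == 0].mean()

# inverse-propensity weighting reweights each arm to the full population
w = t / p_treat + (1 - t) / (1 - p_treat)
ipw = (np.sum(w * t * y) / np.sum(w * t)
       - np.sum(w * (1 - t) * y) / np.sum(w * (1 - t)))
```

Here `ipw` lands near the true effect of 2 while `naive` is pulled upward by the confounder; with estimated propensities and many candidate confounders, the choice of adjustment set becomes the central problem.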