no code implementations • 16 Jan 2024 • Lei Duan, Ziyang Jiang, David Carlson
We show that the proposed data augmentation strategy measurably improves the state-of-the-art convolutional neural network-random forest (CNN-RF) model, yielding a noteworthy gain in spatial correlation and a reduction in prediction error.
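For intuition, a minimal sketch of the CNN-RF pattern this snippet refers to: a small convolutional encoder embeds each image patch, and a random forest regresses the target from the learned features. The architecture, dimensions, and use of scikit-learn's RandomForestRegressor are illustrative assumptions, not the paper's exact setup.

```python
# Minimal CNN-RF sketch: a small CNN embeds each image patch, and a random
# forest regresses the target from the learned features. All details are
# illustrative, not those of the paper.
import torch
import torch.nn as nn
from sklearn.ensemble import RandomForestRegressor

class CNNEncoder(nn.Module):
    def __init__(self, out_dim=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, out_dim),
        )

    def forward(self, x):
        return self.net(x)

encoder = CNNEncoder()
images = torch.randn(128, 3, 32, 32)   # stand-in for image patches
targets = torch.randn(128)             # stand-in for the monitored quantity

with torch.no_grad():                  # encoder assumed already trained
    feats = encoder(images).numpy()

rf = RandomForestRegressor(n_estimators=200)
rf.fit(feats, targets.numpy())
preds = rf.predict(feats)
```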
no code implementations • 13 Jun 2023 • Ziyang Jiang, Yiling Liu, Michael H. Klein, Ahmed Aloui, Yiman Ren, Keyu Li, Vahid Tarokh, David Carlson
This is important in many scientific applications to identify the underlying mechanisms of a treatment effect.
no code implementations • 3 Feb 2023 • Yiling Liu, Juncheng Dong, Ziyang Jiang, Ahmed Aloui, Keyu Li, Hunter Klein, Vahid Tarokh, David Carlson
To address this limitation, we propose a novel generalization bound that reweights source classification error by aligning source and target sub-domains.
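A minimal sketch of the general reweighting idea, assuming per-sample alignment weights are available; here they are faked from a hypothetical domain discriminator via a density-ratio heuristic, which is not the paper's sub-domain alignment procedure.

```python
# Illustrative reweighted source loss: each source sample's classification
# error is scaled by a weight reflecting how well its neighborhood aligns
# with the target distribution. The weighting scheme is a placeholder.
import torch
import torch.nn.functional as F

def reweighted_source_loss(logits, labels, weights):
    """Per-sample cross-entropy scaled by alignment weights."""
    per_sample = F.cross_entropy(logits, labels, reduction="none")
    weights = weights / weights.sum()          # normalize to a distribution
    return (weights * per_sample).sum()

# Toy usage: weights from an assumed, hypothetical domain discriminator
# d(x) ~ P(target | x); w = d / (1 - d) approximates the density ratio.
logits = torch.randn(64, 10)
labels = torch.randint(0, 10, (64,))
d = torch.rand(64).clamp(0.05, 0.95)
weights = d / (1.0 - d)
loss = reweighted_source_loss(logits, labels, weights)
```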
1 code implementation • 26 Jan 2023 • Ziyang Jiang, Zhuoran Hou, Yiling Liu, Yiman Ren, Keyu Li, David Carlson
A number of methods have been proposed for causal effect estimation, yet few have demonstrated efficacy in handling data with complex structures, such as images.
1 code implementation • 15 May 2022 • Ziyang Jiang, Tongshu Zheng, Yiling Liu, David Carlson
Many deep learning applications could be enhanced by modeling such known properties.
Ranked #1 on Gaussian Processes on UCI POWER
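To illustrate how a composite kernel can encode known properties such as spatial smoothness, here is a toy exact-GP regression with an explicit sum kernel in plain NumPy; the paper's contribution is an implicit composite kernel built from a neural network, which this sketch does not reproduce.

```python
# Toy GP regression with a composite (sum) kernel: an RBF kernel over spatial
# coordinates encodes known smoothness, while a second kernel handles the
# remaining features. Hyperparameters are arbitrary.
import numpy as np

def rbf(A, B, lengthscale=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale**2)

rng = np.random.default_rng(0)
X_sp, X_ft = rng.normal(size=(50, 2)), rng.normal(size=(50, 3))
y = np.sin(X_sp[:, 0]) + 0.1 * rng.normal(size=50)

K = rbf(X_sp, X_sp, 1.0) + 0.5 * rbf(X_ft, X_ft, 2.0)   # composite kernel
alpha = np.linalg.solve(K + 1e-2 * np.eye(50), y)        # posterior mean weights

Xs_sp, Xs_ft = rng.normal(size=(5, 2)), rng.normal(size=(5, 3))
K_star = rbf(Xs_sp, X_sp, 1.0) + 0.5 * rbf(Xs_ft, X_ft, 2.0)
mean = K_star @ alpha                                     # predictive mean
```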
no code implementations • 13 May 2022 • Tianhui Zhou, William E. Carson IV, Michael Hunter Klein, David Carlson
Finally, we justify our approach by providing theoretical analyses that demonstrate that MDCN improves on the generalization bound of the new, unobserved target center.
1 code implementation • 7 Jan 2022 • William E. Carson IV, Austin Talbot, David Carlson
Deep autoencoders are often extended with a supervised or adversarial loss to learn latent representations with desirable properties, such as greater predictivity of labels and outcomes or fairness with respect to a sensitive variable.
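A minimal supervised-autoencoder sketch, assuming a simple MLP encoder/decoder and an MSE supervision term; layer sizes and the loss weight are illustrative.

```python
# Minimal supervised autoencoder: the usual reconstruction loss is augmented
# with a supervised term so the latent code stays predictive of a label.
import torch
import torch.nn as nn

class SupervisedAE(nn.Module):
    def __init__(self, d_in=20, d_latent=4):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(d_in, 64), nn.ReLU(), nn.Linear(64, d_latent))
        self.dec = nn.Sequential(nn.Linear(d_latent, 64), nn.ReLU(), nn.Linear(64, d_in))
        self.head = nn.Linear(d_latent, 1)  # supervised predictor on the code

    def forward(self, x):
        z = self.enc(x)
        return self.dec(z), self.head(z).squeeze(-1)

model = SupervisedAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(256, 20)
y = torch.randn(256)                       # continuous outcome
for _ in range(100):
    x_hat, y_hat = model(x)
    loss = nn.functional.mse_loss(x_hat, x) + 1.0 * nn.functional.mse_loss(y_hat, y)
    opt.zero_grad(); loss.backward(); opt.step()
```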
no code implementations • NeurIPS 2021 • Neil Gallagher, Kafui Dzirasa, David Carlson
We prove that it is compatible with the implicit assumptions of linear factor models, and we provide a method to estimate the DS.
no code implementations • 4 Oct 2021 • Tianhui Zhou, William E. Carson IV, David Carlson
However, existing methods for estimating the potential outcome distributions underlying treatment effects often impose restrictive or simplistic assumptions about these distributions.
no code implementations • 24 Sep 2021 • William E. Carson IV, Dmitry Isaev, Samantha Major, Guillermo Sapiro, Geraldine Dawson, David Carlson
Second, we show this same model can be used to learn a disentangled representation of multimodal biomarkers that results in an increase in predictive performance.
1 code implementation • 9 Sep 2021 • Liyun Tu, Austin Talbot, Neil Gallagher, David Carlson
We demonstrate the effectiveness of these developments using synthetic data and electrophysiological recordings with an emphasis on how our learned representations can be used to design scientific experiments.
1 code implementation • 10 Apr 2020 • Austin Talbot, David Dunson, Kafui Dzirasa, David Carlson
Targeted stimulation of the brain has the potential to treat mental illnesses.
1 code implementation • 12 Feb 2020 • Tianhui Zhou, Yitong Li, Yuan Wu, David Carlson
We address these challenges by proposing a novel method to capture predictive distributions in regression by defining two neural networks with two distinct loss functions.
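As a simplified stand-in for the "two networks, two losses" idea (not the paper's collaborating-networks formulation, which pairs a CDF network with its inverse), one network can fit the conditional mean with squared error while a second fits a conditional quantile with pinball loss:

```python
# Simplified two-network sketch: net_mu fits the conditional mean with squared
# error, net_q fits the conditional 0.9 quantile with pinball loss.
import torch
import torch.nn as nn

def pinball(pred, y, tau=0.9):
    e = y - pred
    return torch.mean(torch.maximum(tau * e, (tau - 1) * e))

def mlp():
    return nn.Sequential(nn.Linear(1, 64), nn.ReLU(), nn.Linear(64, 1))

net_mu, net_q = mlp(), mlp()
opt = torch.optim.Adam(list(net_mu.parameters()) + list(net_q.parameters()), lr=1e-3)

x = torch.rand(512, 1) * 4
y = torch.sin(x) + 0.2 * (1 + x) * torch.randn_like(x)   # heteroscedastic noise
for _ in range(500):
    loss = nn.functional.mse_loss(net_mu(x), y) + pinball(net_q(x), y)
    opt.zero_grad(); loss.backward(); opt.step()
```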
1 code implementation • 5 Oct 2019 • Pengyu Cheng, Yitong Li, Xinyuan Zhang, Liqun Chen, David Carlson, Lawrence Carin
The relative importance of global versus local structure for the embeddings is learned automatically.
no code implementations • 4 Jun 2019 • Cynthia Rudin, David Carlson
9) There is a method to the madness of deep neural architectures, but not always.
1 code implementation • CVPR 2019 • Yitong Li, Zhe Gan, Yelong Shen, Jingjing Liu, Yu Cheng, Yuexin Wu, Lawrence Carin, David Carlson, Jianfeng Gao
We therefore propose a new story-to-image-sequence generation model, StoryGAN, based on the sequential conditional GAN framework.
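A skeletal sequential conditional GAN in this spirit, assuming a GRU story context and a flat image generator; the real StoryGAN adds a deeper context encoder plus image-level and story-level discriminators, all omitted here.

```python
# Skeletal sequential conditional GAN: a GRU tracks story context across
# sentence embeddings, and the generator produces one image per sentence
# from (context, noise). Everything here is illustrative.
import torch
import torch.nn as nn

class SeqCondGenerator(nn.Module):
    def __init__(self, d_sent=128, d_noise=64, d_img=3 * 32 * 32):
        super().__init__()
        self.d_noise = d_noise
        self.context = nn.GRU(d_sent, 128, batch_first=True)
        self.gen = nn.Sequential(nn.Linear(128 + d_noise, 256), nn.ReLU(),
                                 nn.Linear(256, d_img), nn.Tanh())

    def forward(self, sentences):                  # (B, T, d_sent)
        h, _ = self.context(sentences)             # per-step story context
        z = torch.randn(*h.shape[:2], self.d_noise)
        imgs = self.gen(torch.cat([h, z], dim=-1)) # (B, T, d_img)
        return imgs.view(*h.shape[:2], 3, 32, 32)

gen = SeqCondGenerator()
story = torch.randn(8, 5, 128)                     # 5 sentence embeddings
frames = gen(story)                                # (8, 5, 3, 32, 32)
```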
1 code implementation • ICML 2017 • Ari Pakman, Dar Gilboa, David Carlson, Liam Paninski
We introduce a novel stochastic version of the non-reversible, rejection-free Bouncy Particle Sampler (BPS), a Markov process whose sample trajectories are piecewise linear.
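For intuition, a minimal non-stochastic BPS for a standard Gaussian target, where the linear bounce rate lets event times be sampled exactly; the paper's stochastic version replaces the exact gradient with noisy minibatch estimates.

```python
# Minimal Bouncy Particle Sampler for U(x) = |x|^2 / 2, so grad U = x and the
# bounce rate max(0, (x + t v) . v) is linear in t, letting event times be
# sampled by exactly inverting the integrated rate.
import numpy as np

rng = np.random.default_rng(0)
d, refresh_rate = 2, 1.0
x, v = np.zeros(d), rng.normal(size=d)
samples, clock, next_tick, dt = [], 0.0, 0.0, 0.05

while len(samples) < 20000:
    a, b = x @ v, v @ v                      # rate(t) = max(0, a + b t)
    E = rng.exponential()
    if a >= 0:                               # invert int_0^T (a + b t) dt = E
        t_bounce = (-a + np.sqrt(a * a + 2 * b * E)) / b
    else:                                    # rate is zero until t0 = -a / b
        t_bounce = -a / b + np.sqrt(2 * E / b)
    t = min(t_bounce, rng.exponential(1.0 / refresh_rate))
    while next_tick <= clock + t:            # record on a fixed time grid so
        samples.append(x + (next_tick - clock) * v)  # states are time-weighted
        next_tick += dt
    clock += t
    x = x + t * v                            # piecewise-linear trajectory
    if t == t_bounce:
        g = x                                # grad U at the bounce point
        v = v - 2 * (v @ g) / (g @ g) * g    # reflect v off grad U
    else:
        v = rng.normal(size=d)               # full velocity refresh

print(np.mean(samples, axis=0), np.var(samples, axis=0))  # ~0 and ~1
```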
no code implementations • 7 Mar 2016 • David Carlson, Patrick Stinson, Ari Pakman, Liam Paninski
Partition functions of probability distributions are important quantities for model evaluation and comparisons.
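A bare-bones example of estimating one such quantity: an importance-sampling estimate of Z = ∫ exp(-U(x)) dx under a Gaussian proposal. The paper analyzes more sophisticated estimators; this only fixes the object of study.

```python
# Importance-sampling estimate of a partition function Z = int exp(-U(x)) dx:
# Z ~= mean over x ~ q of exp(-U(x)) / q(x), with a Gaussian proposal q.
import numpy as np

def U(x):                                   # unnormalized negative log-density
    return 0.5 * np.sum(x**2, axis=-1)      # => true Z = (2*pi)^(d/2)

rng = np.random.default_rng(0)
d, n, s2 = 2, 100_000, 2.0                  # proposal N(0, s2 * I)
xs = rng.normal(scale=np.sqrt(s2), size=(n, d))
log_q = -0.5 * np.sum(xs**2, axis=-1) / s2 - 0.5 * d * np.log(2 * np.pi * s2)
Z_hat = np.mean(np.exp(-U(xs) - log_q))
print(Z_hat, (2 * np.pi) ** (d / 2))        # estimate vs true value ~6.283
```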
1 code implementation • 25 Dec 2015 • Changyou Chen, David Carlson, Zhe Gan, Chunyuan Li, Lawrence Carin
Stochastic gradient Markov chain Monte Carlo (SG-MCMC) methods are Bayesian analogs to popular stochastic optimization methods; however, this connection is not well studied.
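For reference, the basic SGLD update that anchors this analogy (Welling & Teh, 2011): a stochastic-gradient step plus Gaussian noise with variance 2ε, shown here on a toy 1-D Gaussian.

```python
import torch

def sgld_step(params, grads_log_post, eps=1e-4):
    """One SGLD update: gradient ascent on the (minibatch-estimated) log
    posterior plus Gaussian noise with variance 2*eps, which turns the
    optimizer into a posterior sampler."""
    for p, g in zip(params, grads_log_post):
        p.add_(eps * g + torch.randn_like(p) * (2 * eps) ** 0.5)

# Toy check: sample from N(0, 1), where grad log p(theta) = -theta.
theta, draws = torch.zeros(1), []
for _ in range(20000):
    sgld_step([theta], [-theta], eps=1e-2)
    draws.append(theta.item())
print(torch.tensor(draws[5000:]).std())  # ~1
```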
no code implementations • 23 Dec 2015 • Chunyuan Li, Changyou Chen, David Carlson, Lawrence Carin
PyTorch implementations of Bayes by Backprop, MC Dropout, SGLD, the Local Reparametrization Trick, KF-Laplace, and more
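A sketch of the pSGLD update from the paper above, which extends the SGLD step shown earlier with an RMSprop-style diagonal preconditioner rescaling both the drift and the injected noise; the paper's curvature-correction term Γ is omitted here, as is common in practice.

```python
import torch

def psgld_step(p, grad_log_post, V, eps=1e-3, alpha=0.99, lam=1e-5):
    """One pSGLD update. V is the running second-moment estimate of the
    gradient (same shape as p, e.g. initialized to torch.ones_like(p));
    it is updated in place."""
    V.mul_(alpha).add_((1 - alpha) * grad_log_post**2)   # RMSprop statistic
    G = 1.0 / (lam + V.sqrt())                           # diagonal preconditioner
    p.add_(0.5 * eps * G * grad_log_post                 # preconditioned drift
           + torch.randn_like(p) * (eps * G).sqrt())     # preconditioned noise
```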
no code implementations • 13 Nov 2015 • Josh Merel, David Carlson, Liam Paninski, John P. Cunningham
We describe how training a decoder in this way is a novel variant of an imitation learning problem, where an oracle or expert is employed for supervised training in lieu of direct observations, which are not available.
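A hedged illustration of the oracle idea, with synthetic stand-ins throughout: the decoder is regressed onto an oracle policy's intended velocity (step straight toward the goal) instead of unavailable observed movements.

```python
# Illustrative oracle-supervised decoder training: neural features are mapped
# to cursor velocities, with targets given by an oracle policy that always
# points at the goal. The feature map and oracle here are invented stand-ins.
import torch
import torch.nn as nn

decoder = nn.Linear(50, 2)                       # neural features -> 2D velocity
opt = torch.optim.Adam(decoder.parameters(), lr=1e-3)

feats = torch.randn(1000, 50)                    # synthetic neural features
cursor = torch.randn(1000, 2)
goal = torch.randn(1000, 2)
oracle_v = goal - cursor
oracle_v = oracle_v / oracle_v.norm(dim=1, keepdim=True)  # unit step toward goal

for _ in range(200):
    loss = nn.functional.mse_loss(decoder(feats), oracle_v)
    opt.zero_grad(); loss.backward(); opt.step()
```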
1 code implementation • NeurIPS 2015 • Zhe Gan, Chunyuan Li, Ricardo Henao, David Carlson, Lawrence Carin
Deep dynamic generative models are developed to learn sequential dependencies in time-series data.