no code implementations • 28 Mar 2024 • S. J. Ben Yoo, Luis El-Srouji, Suman Datta, Shimeng Yu, Jean Anne Incorvia, Alberto Salleo, Volker Sorger, Juejun Hu, Lionel C Kimerling, Kristofer Bouchard, Joy Geng, Rishidev Chaudhuri, Charan Ranganath, Randall O'Reilly
The human brain has immense learning capabilities at an energy efficiency and scale that no artificial system has been able to match.
no code implementations • 14 Oct 2022 • Hector Garcia Martin, Tijana Radivojevic, Jeremy Zucker, Kristofer Bouchard, Jess Sustarich, Sean Peisert, Dan Arnold, Nathan Hillson, Gyorgy Babnigg, Jose Manuel Marti, Christopher J. Mungall, Gregg T. Beckham, Lucas Waldburger, James Carothers, Shivshankar Sundaram, Deb Agarwal, Blake A. Simmons, Tyler Backman, Deepanwita Banerjee, Deepti Tanjore, Lavanya Ramakrishnan, Anup Singh
Self-driving labs (SDLs) combine fully automated experiments with artificial intelligence (AI) that decides the next set of experiments.
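The closed loop described above can be caricatured in a few lines. This is a minimal sketch with a hypothetical objective function and proposal rule, not the authors' actual system: run an automated "experiment", log the result, and let a simple decision rule choose the next condition.

```python
import random

def run_experiment(temperature):
    # Stand-in for an automated assay; the hidden optimum is 37 degrees.
    return -(temperature - 37.0) ** 2

random.seed(0)
history = []
t = 20.0  # initial condition
for _ in range(50):
    y = run_experiment(t)
    history.append((t, y))
    best_t, _ = max(history, key=lambda h: h[1])
    # "AI" step: propose the next experiment near the best condition so far.
    t = best_t + random.uniform(-5.0, 5.0)

best_t, best_y = max(history, key=lambda h: h[1])
print(best_t, best_y)  # the loop homes in on the hidden optimum near 37
```

A real SDL would replace the random local proposal with a surrogate model (e.g. Bayesian optimization), but the experiment-decide-experiment structure is the same.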
no code implementations • 3 Mar 2022 • Rui Meng, Tianyi Luo, Kristofer Bouchard
The key insight of our framework is to learn representations by minimizing the compression complexity and maximizing the predictive information in latent space.
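One common way to formalize this trade-off is an information-bottleneck-style objective (an illustrative sketch; the symbols and the trade-off parameter $\beta$ are assumptions, not the paper's exact formulation), which compresses the past while retaining what predicts the future:

```latex
\min_{p(z \mid x_{\mathrm{past}})} \; I(X_{\mathrm{past}}; Z) \;-\; \beta \, I(Z; X_{\mathrm{future}})
```

Here the first term penalizes compression complexity of the latent $Z$, and the second rewards predictive information carried by $Z$ about the future.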
no code implementations • 27 Nov 2021 • Luca Pion-Tonachini, Kristofer Bouchard, Hector Garcia Martin, Sean Peisert, W. Bradley Holtz, Anil Aswani, Dipankar Dwivedi, Haruko Wainwright, Ghanshyam Pilania, Benjamin Nachman, Babetta L. Marrone, Nicola Falco, Prabhat, Daniel Arnold, Alejandro Wolf-Yadlin, Sarah Powers, Sharlee Climer, Quinn Jackson, Ty Carlson, Michael Sohn, Petrus Zwart, Neeraj Kumar, Amy Justice, Claire Tomlin, Daniel Jacobson, Gos Micklem, Georgios V. Gkoutos, Peter J. Bickel, Jean-Baptiste Cazier, Juliane Müller, Bobbie-Jo Webb-Robertson, Rick Stevens, Mark Anderson, Ken Kreutz-Delgado, Michael W. Mahoney, James B. Brown
We outline emerging opportunities and challenges to enhance the utility of AI for scientific discovery.
no code implementations • 25 Jun 2021 • Rui Meng, Kristofer Bouchard
Stochastic linear mixing models (SLMMs) assume the mixing coefficients depend on the input, making them more flexible and effective at capturing complex output dependence.
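The idea of input-dependent mixing can be sketched in a toy form. Everything here (the latent functions, the logistic gate) is a hypothetical illustration, not the paper's model, where the latent functions and weights would be Gaussian processes:

```python
import math

def latent_f1(x):
    return math.sin(x)

def latent_f2(x):
    return 0.5 * x

def mixing_weights(x):
    # Input-dependent mixing coefficients via a logistic gate
    # (an illustrative choice; they sum to one by construction).
    w1 = 1.0 / (1.0 + math.exp(-x))
    return [w1, 1.0 - w1]

def slmm_output(x):
    # Output is a linear mixture of latent functions, with weights
    # that vary with the input x rather than being fixed constants.
    w = mixing_weights(x)
    return w[0] * latent_f1(x) + w[1] * latent_f2(x)

# Near x = 0 the weights are balanced; for large |x| one latent dominates.
print(slmm_output(0.0))  # both latents vanish at 0, so the output is 0.0
```

A fixed-coefficient mixing model would use constant weights; letting `mixing_weights` depend on `x` is what buys the extra flexibility.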
no code implementations • 1 Jun 2021 • Rui Meng, Herbie Lee, Kristofer Bouchard
This paper presents an efficient variational inference framework for deriving a family of structured Gaussian process regression network (SGPRN) models.
no code implementations • L4DC 2020 • Trevor Ruiz, Sharmodeep Bhattacharyya, Mahesh Balasubramanian, Kristofer Bouchard
A well-known feature of RML inference is that, in general, the technique trades off sparsity against bias, with the balance set by the choice of regularization hyperparameter.
1 code implementation • NeurIPS 2019 • David Clark, Jesse Livezey, Kristofer Bouchard
Linear dimensionality reduction methods are commonly used to extract low-dimensional structure from high-dimensional data.
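As a concrete instance, the leading direction of a classical linear method (PCA) can be found by power iteration on the sample covariance. This is a generic two-dimensional sketch, not the dynamics-based method the paper proposes:

```python
import math

def top_principal_direction(data, iters=200):
    # Power iteration on the 2x2 sample covariance of zero-mean data:
    # repeatedly multiply by the covariance and renormalize.
    n = len(data)
    cxx = sum(x * x for x, _ in data) / n
    cyy = sum(y * y for _, y in data) / n
    cxy = sum(x * y for x, y in data) / n
    v = [1.0, 0.0]
    for _ in range(iters):
        w = [cxx * v[0] + cxy * v[1], cxy * v[0] + cyy * v[1]]
        norm = math.hypot(w[0], w[1])
        v = [w[0] / norm, w[1] / norm]
    return v

# Zero-mean points lying (noisily) along the line y = x.
pts = [(-2.0, -1.9), (-1.0, -1.1), (1.0, 0.9), (2.0, 2.1)]
v = top_principal_direction(pts)
print(v)  # close to (1/sqrt(2), 1/sqrt(2)), the dominant direction
```

Projecting each point onto `v` gives the one-dimensional representation that preserves the most variance; methods like the paper's differ in which statistic of the data they ask the low-dimensional projection to preserve.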
no code implementations • 21 Aug 2018 • Mahesh Balasubramanian, Trevor Ruiz, Brandon Cook, Sharmodeep Bhattacharyya, Prabhat, Aviral Shrivastava, Kristofer Bouchard
The analysis of scientific data of increasing size and complexity requires statistical machine learning methods that are both interpretable and predictive.
no code implementations • 1 Jun 2018 • Tayo Ajayi, David Mildebrath, Anastasios Kyrillidis, Shashanka Ubaru, Georgios Kollias, Kristofer Bouchard
We present theoretical results on the convergence of \emph{non-convex} accelerated gradient descent in matrix factorization models with $\ell_2$-norm loss.
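The setting can be made concrete with a rank-1 toy problem: minimize $\|M - uv^\top\|_F^2$, which is non-convex in $(u, v)$, using gradient descent with Nesterov-style extrapolation. This is a generic numerical sketch (step size and momentum chosen for the toy problem), not the paper's analysis or parameter schedule:

```python
# Rank-1 of a 2x2 matrix: M is the outer product of (2, 1) with itself.
M = [[4.0, 2.0], [2.0, 1.0]]

def loss(u, v):
    # Frobenius-norm-squared reconstruction error ||M - u v^T||_F^2.
    return sum((M[i][j] - u[i] * v[j]) ** 2 for i in range(2) for j in range(2))

u, v = [1.0, 1.0], [1.0, 1.0]
u_prev, v_prev = u[:], v[:]
step, momentum = 0.02, 0.5
for _ in range(2000):
    # Nesterov look-ahead: extrapolate along the previous update direction.
    yu = [u[i] + momentum * (u[i] - u_prev[i]) for i in range(2)]
    yv = [v[j] + momentum * (v[j] - v_prev[j]) for j in range(2)]
    r = [[M[i][j] - yu[i] * yv[j] for j in range(2)] for i in range(2)]
    # Gradients of the l2 loss at the look-ahead point.
    gu = [-2.0 * sum(r[i][j] * yv[j] for j in range(2)) for i in range(2)]
    gv = [-2.0 * sum(r[i][j] * yu[i] for i in range(2)) for j in range(2)]
    u_prev, v_prev = u, v
    u = [yu[i] - step * gu[i] for i in range(2)]
    v = [yv[j] - step * gv[j] for j in range(2)]

print(loss(u, v))  # near zero despite the non-convex objective
```

The objective is non-convex (e.g. scaling $u$ up and $v$ down leaves it unchanged), yet the accelerated iteration still drives the loss to zero here; characterizing when and how fast that happens is the kind of question the convergence theory addresses.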
1 code implementation • 30 Apr 2015 • Joaquin Rapela, Mark Kostuk, Peter F. Rowat, Tim Mullen, Edward F. Chang, Kristofer Bouchard
Here we demonstrate that the activity of neural ensembles can be quantitatively modeled.