no code implementations • 22 Nov 2021 • Jung H. Lee, Henry J. Kvinge, Scott Howland, Zachary New, John Buckheit, Lauren A. Phillips, Elliott Skomski, Jessica Hibler, Courtney D. Corley, Nathan O. Hodas
Our empirical evaluations suggest that ATL can help DL models learn more efficiently, especially when available examples are limited.
no code implementations • 2 Jun 2021 • Henry Kvinge, Scott Howland, Nico Courts, Lauren A. Phillips, John Buckheit, Zachary New, Elliott Skomski, Jung H. Lee, Sandeep Tiwari, Jessica Hibler, Courtney D. Corley, Nathan O. Hodas
We describe how this problem is subtly different from out-of-distribution detection and describe a new method of identifying OOS examples within the Prototypical Networks framework using a fixed point which we call the generic representation.
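The Prototypical Networks setup this paper builds on can be sketched as follows. This is a hedged toy illustration, not the paper's generic-representation method: class prototypes are mean embeddings of support examples, queries are scored by Euclidean distance to each prototype, and an unusually large minimum distance is one naive signal that a query is out-of-sample. The class names, cluster parameters, and threshold below are all invented for illustration.

```python
import numpy as np

# Toy embedded support sets for two classes (assumed 2-D embeddings).
rng = np.random.default_rng(1)
support = {
    "cat": rng.normal(0.0, 0.1, (5, 2)),
    "dog": rng.normal(3.0, 0.1, (5, 2)),
}

# Prototypical Networks: each class prototype is the mean embedding.
prototypes = {c: e.mean(axis=0) for c, e in support.items()}

def nearest(query):
    """Return the nearest prototype's class and its distance."""
    dists = {c: np.linalg.norm(query - p) for c, p in prototypes.items()}
    label = min(dists, key=dists.get)
    return label, dists[label]

label, dist = nearest(np.array([0.05, -0.02]))
print(label)            # "cat"

far_label, far_dist = nearest(np.array([50.0, 50.0]))
# A large minimum distance hints the query lies outside all support
# classes; the paper instead compares against a learned fixed point
# (the "generic representation") rather than a raw distance threshold.
print(far_dist > 10.0)  # True
```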
no code implementations • 23 Sep 2020 • Henry Kvinge, Zachary New, Nico Courts, Jung H. Lee, Lauren A. Phillips, Courtney D. Corley, Aaron Tuor, Andrew Avila, Nathan O. Hodas
Few-shot learning algorithms, which seek to address this limitation, are designed to generalize well to new tasks with limited data.
no code implementations • 9 Oct 2018 • Craig Bakker, Michael J. Henry, Nathan O. Hodas
In this paper, we show that feedforward and recurrent neural networks exhibit an outer product derivative structure but that convolutional neural networks do not.
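The "outer product derivative structure" can be seen directly in a single linear layer. The sketch below (not the paper's code) shows that for y = Wx with squared-error loss, the weight gradient is the outer product of the backpropagated error and the layer input, and checks this against a finite-difference approximation; the dimensions and random seed are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((3, 4))   # layer weights
x = rng.standard_normal(4)        # layer input
t = rng.standard_normal(3)        # target

y = W @ x
delta = y - t                     # dL/dy for L = 0.5 * ||y - t||^2
grad_W = np.outer(delta, x)       # the outer product derivative structure

# Verify entrywise against central finite differences of the loss.
eps = 1e-6
fd = np.zeros_like(W)
for i in range(W.shape[0]):
    for j in range(W.shape[1]):
        Wp = W.copy(); Wp[i, j] += eps
        Wm = W.copy(); Wm[i, j] -= eps
        Lp = 0.5 * np.sum((Wp @ x - t) ** 2)
        Lm = 0.5 * np.sum((Wm @ x - t) ** 2)
        fd[i, j] = (Lp - Lm) / (2 * eps)

print(np.allclose(grad_W, fd, atol=1e-5))  # True
```

Convolutional layers share weights across spatial positions, so their gradients sum many such outer products, which is why the same clean structure does not carry over.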
no code implementations • 13 May 2018 • Nathan O. Hodas, Panos Stinis
We show that adding structure to the neural network that enforces higher mutual information between layers speeds training and leads to more accurate results.
no code implementations • 12 Feb 2018 • Nathan Hilliard, Lawrence Phillips, Scott Howland, Artëm Yankov, Courtney D. Corley, Nathan O. Hodas
Learning high quality class representations from few examples is a key problem in metric-learning approaches to few-shot learning.
no code implementations • ICLR 2018 • Craig Bakker, Michael J. Henry, Nathan O. Hodas
Training methods for deep networks are primarily variants on stochastic gradient descent.
1 code implementation • 7 Dec 2017 • Garrett B. Goh, Charles Siegel, Abhinav Vishnu, Nathan O. Hodas
With access to large datasets, deep neural networks (DNN) have achieved human-level accuracy in image and speech recognition tasks.
4 code implementations • 6 Dec 2017 • Garrett B. Goh, Nathan O. Hodas, Charles Siegel, Abhinav Vishnu
Chemical databases store information in text representations, and the SMILES format is a universal standard used across cheminformatics software.
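Because SMILES strings are plain text, a sequence model can consume them with nothing more than a character-level encoding. The sketch below is a hedged illustration, not the paper's pipeline; the molecules and vocabulary are toy examples.

```python
# Toy SMILES strings: ethanol, benzene, acetic acid.
smiles = ["CCO", "c1ccccc1", "CC(=O)O"]

# Build a character vocabulary and an index for each symbol.
vocab = sorted({ch for s in smiles for ch in s})
index = {ch: i for i, ch in enumerate(vocab)}

# Encode each molecule as a sequence of integer indices.
encoded = [[index[ch] for ch in s] for s in smiles]
print(vocab)
print(encoded[0])
```

A real tokenizer would also handle multi-character atoms such as `Cl` and `Br` as single tokens, which this character-level sketch deliberately ignores.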
2 code implementations • 5 Oct 2017 • Garrett B. Goh, Charles Siegel, Abhinav Vishnu, Nathan O. Hodas, Nathan Baker
The meteoric rise of deep learning models in computer vision research, which have achieved human-level accuracy in image recognition tasks, is firm evidence of the impact of representation learning in deep neural networks.
no code implementations • 22 Aug 2017 • Nathan Hilliard, Nathan O. Hodas, Courtney D. Corley
The ability to learn from a small number of examples has been a difficult problem in machine learning since its inception.
2 code implementations • 20 Jun 2017 • Garrett B. Goh, Charles Siegel, Abhinav Vishnu, Nathan O. Hodas, Nathan Baker
We then show how Chemception can serve as a general-purpose neural network architecture for predicting toxicity, activity, and solvation properties when trained on a modest database of 600 to 40,000 compounds.
no code implementations • 17 Jan 2017 • Garrett B. Goh, Nathan O. Hodas, Abhinav Vishnu
The rise and fall of artificial neural networks is well documented in the scientific literature of both computer science and computational chemistry.
no code implementations • 17 Dec 2016 • Jacob S. Hunter, Nathan O. Hodas
In the present work we investigate the use of information theoretic measures such as mutual information and Kullback-Leibler (KL) divergence as objective functions for fitting such models without knowledge of the hidden layer.
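The Kullback-Leibler divergence mentioned here, for discrete distributions, is straightforward to compute. This is a generic illustration of the information-theoretic objective (not the paper's code); the distributions and epsilon smoothing are assumptions for the sketch.

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) for discrete distributions, with small-eps smoothing
    to avoid log(0); zero when p and q coincide."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

p = np.array([0.7, 0.2, 0.1])
print(kl_divergence(p, p))                      # 0.0
print(kl_divergence(p, [1/3, 1/3, 1/3]) > 0)    # True
```

Used as an objective, minimizing KL(p || q) drives a model distribution q toward the data distribution p without requiring access to hidden-layer values.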
no code implementations • 6 Nov 2016 • Ark Anderson, Kyle Shaffer, Artem Yankov, Court D. Corley, Nathan O. Hodas
In this paper we present a technique to train neural network models on small amounts of data.