1 code implementation • 3 Apr 2024 • Stephen Casper, Jieun Yun, Joonhyuk Baek, Yeseong Jung, Minhwan Kim, Kiwan Kwon, Saerom Park, Hayden Moore, David Shriver, Marissa Connor, Keltin Grimes, Angus Nicolson, Arush Tagade, Jessica Rumbelow, Hieu Minh Nguyen, Dylan Hadfield-Menell
Interpretability techniques are valuable for helping humans understand and oversee AI systems.
no code implementations • 31 Mar 2023 • Marissa Connor, Bruno Olshausen, Christopher Rozell
When interacting in a three-dimensional world, humans must estimate 3D structure from visual inputs projected down to two-dimensional retinal images.
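The loss of depth in that projection can be illustrated with a pinhole camera model (a standard stand-in for retinal projection; this sketch is illustrative and not the paper's actual setup, and `focal_length` is a placeholder parameter):

```python
import numpy as np

def project_pinhole(points_3d, focal_length=1.0):
    """Project 3D points onto a 2D image plane with a pinhole camera model."""
    points_3d = np.asarray(points_3d, dtype=float)
    x, y, z = points_3d[:, 0], points_3d[:, 1], points_3d[:, 2]
    # Perspective division discards depth: this is why 3D structure
    # must be inferred, not read off, from 2D retinal inputs.
    return np.stack([focal_length * x / z, focal_length * y / z], axis=1)

# Two points at different depths project to the same 2D location.
pts = np.array([[1.0, 2.0, 2.0], [2.0, 4.0, 4.0]])
print(project_pinhole(pts))  # both map to [0.5, 1.0]
```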
no code implementations • 5 Dec 2022 • Marissa Connor, Vincent Emanuele
Semi-supervised learning methods can train high-accuracy machine learning models with a fraction of the labeled training samples required for traditional supervised learning.
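One standard semi-supervised baseline that matches this description is self-training: pseudo-label the unlabeled points the current model is confident about, then refit on the enlarged set. The sketch below uses a minimal nearest-centroid classifier and a distance threshold as a confidence proxy; it illustrates the general idea, not the paper's specific method, and all names and the `max_dist` parameter are illustrative:

```python
import numpy as np

def nearest_centroid_fit(X, y):
    """Fit per-class centroids; a minimal classifier for illustration."""
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def nearest_centroid_predict(centroids, X):
    """Return predicted labels and distance to the nearest centroid."""
    classes = sorted(centroids)
    d = np.stack([np.linalg.norm(X - centroids[c], axis=1) for c in classes])
    return np.array(classes)[d.argmin(axis=0)], d.min(axis=0)

def self_train(X_lab, y_lab, X_unlab, max_dist=1.0):
    """One round of self-training on confidently pseudo-labeled points."""
    centroids = nearest_centroid_fit(X_lab, y_lab)
    preds, dists = nearest_centroid_predict(centroids, X_unlab)
    keep = dists <= max_dist  # confidence proxy: close to a centroid
    X_aug = np.vstack([X_lab, X_unlab[keep]])
    y_aug = np.concatenate([y_lab, preds[keep]])
    return nearest_centroid_fit(X_aug, y_aug)
```

With two labeled points and three unlabeled ones, the two unlabeled points near the centroids are absorbed into the training set while the ambiguous midpoint is excluded.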
1 code implementation • 22 Jun 2021 • Marissa Connor, Kion Fallah, Christopher Rozell
However, these approaches are limited: they require transformation labels when training their models, and they lack a method for determining which regions of the manifold are appropriate for applying each specific operator.
2 code implementations • NeurIPS 2020 • Matthew O'Shaughnessy, Gregory Canal, Marissa Connor, Mark Davenport, Christopher Rozell
Our objective function encourages both the generative model to faithfully represent the data distribution and the latent factors to have a large causal influence on the classifier output.
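The two-term structure described here can be written schematically as follows (a hedged sketch only: the symbols for the discrepancy term, the causal-influence term, and the trade-off weight are illustrative placeholders, not the paper's exact notation):

```latex
% D(.,.)   - discrepancy between the data distribution and the model's
% C(.;.)   - causal influence of latent factors \alpha on classifier output Y
% \lambda  - trade-off weight between the two terms (illustrative)
\mathcal{L}(\theta)
  \;=\; \mathcal{D}\big(p_{\mathrm{data}},\, p_\theta\big)
  \;-\; \lambda \, \mathcal{C}\big(\alpha;\, Y\big)
```

Minimizing the first term keeps the generative model faithful to the data; maximizing the second rewards latent factors with large causal influence on the classifier output.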
no code implementations • 5 Dec 2019 • Marissa Connor, Christopher Rozell
Deep generative networks have been widely used for learning mappings from a low-dimensional latent space to a high-dimensional data space.
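Such a mapping takes a low-dimensional latent code to a high-dimensional data point. A minimal untrained decoder makes the shapes concrete (an illustrative sketch: the dimensions, weights, and function names are placeholders, and a real deep generative network would be trained):

```python
import numpy as np

def decoder(z, W1, b1, W2, b2):
    """Minimal generative mapping g: R^d -> R^D with d << D."""
    h = np.maximum(0.0, z @ W1 + b1)  # ReLU hidden layer
    return h @ W2 + b2

rng = np.random.default_rng(0)
d, D, hidden = 2, 784, 64  # e.g., a 2-D latent space, 784-D data space
W1, b1 = rng.normal(size=(d, hidden)), np.zeros(hidden)
W2, b2 = rng.normal(size=(hidden, D)), np.zeros(D)

# Five latent codes map to five high-dimensional data points.
x = decoder(rng.normal(size=(5, d)), W1, b1, W2, b2)
print(x.shape)  # (5, 784)
```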
no code implementations • ICLR 2018 • Marissa Connor, Christopher Rozell
The main contribution of this paper is to define two transfer learning methods that use this generative manifold representation to learn natural transformations and incorporate them into new data.