no code implementations • 25 Feb 2024 • Nadav Dym, Hannah Lawrence, Jonathan W. Siegel
Canonicalization provides an architecture-agnostic method for enforcing equivariance, with generalizations such as frame averaging recently gaining prominence as lightweight and flexible alternatives to equivariant architectures.
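For intuition only (this is not the construction studied in the paper), a minimal sketch of the underlying symmetrization idea: averaging an arbitrary function over a small finite group makes it invariant by construction. The helper name `group_average` and the toy rotation group are illustrative assumptions.

```python
import numpy as np

def group_average(f, group_actions):
    """Symmetrize an arbitrary function f by averaging it over a finite group
    of input transformations; the result is invariant by construction."""
    def f_sym(x):
        return np.mean([f(g(x)) for g in group_actions], axis=0)
    return f_sym

# Toy example: make a scalar function on 2D arrays invariant to 90-degree rotations.
rotations = [lambda x, k=k: np.rot90(x, k) for k in range(4)]
f = lambda x: float(x[0, 0])          # not rotation-invariant on its own
f_inv = group_average(f, rotations)   # invariant: f_inv(rot90(x)) == f_inv(x)

x = np.random.rand(8, 8)
assert np.isclose(f_inv(x), f_inv(np.rot90(x)))
```

Frame averaging generalizes this idea by averaging over a small, input-dependent subset of the group (a frame) rather than the full group, which keeps the cost manageable when the group is large or infinite.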
no code implementations • 3 Jan 2024 • Bobak T. Kiani, Thien Le, Hannah Lawrence, Stefanie Jegelka, Melanie Weber
We study the problem of learning equivariant neural networks via gradient descent.
no code implementations • 4 Dec 2023 • Hannah Lawrence, Mitchell Tong Harris
Moreover, we observe that these polynomial learning problems are equivariant to the non-compact group $SL(2,\mathbb{R})$, which consists of area-preserving linear transformations.
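For reference, and as a one-line justification of the phrase "area-preserving": the group in question is
$$SL(2,\mathbb{R}) = \{ A \in \mathbb{R}^{2\times 2} : \det A = 1 \},$$
and since $|\det A| = 1$, each linear map $x \mapsto Ax$ preserves areas (the Jacobian determinant of the map is $\det A$).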
1 code implementation • 17 Jul 2023 • Xuan Zhang, Limei Wang, Jacob Helwig, Youzhi Luo, Cong Fu, Yaochen Xie, Meng Liu, Yuchao Lin, Zhao Xu, Keqiang Yan, Keir Adams, Maurice Weiler, Xiner Li, Tianfan Fu, Yucheng Wang, Haiyang Yu, Yuqing Xie, Xiang Fu, Alex Strasser, Shenglong Xu, Yi Liu, Yuanqi Du, Alexandra Saxton, Hongyi Ling, Hannah Lawrence, Hannes Stärk, Shurui Gui, Carl Edwards, Nicholas Gao, Adriana Ladera, Tailin Wu, Elyssa F. Hofgard, Aria Mansouri Tehrani, Rui Wang, Ameya Daigavane, Montgomery Bohde, Jerry Kurtin, Qian Huang, Tuong Phung, Minkai Xu, Chaitanya K. Joshi, Simon V. Mathis, Kamyar Azizzadenesheli, Ada Fang, Alán Aspuru-Guzik, Erik Bekkers, Michael Bronstein, Marinka Zitnik, Anima Anandkumar, Stefano Ermon, Pietro Liò, Rose Yu, Stephan Günnemann, Jure Leskovec, Heng Ji, Jimeng Sun, Regina Barzilay, Tommi Jaakkola, Connor W. Coley, Xiaoning Qian, Xiaofeng Qian, Tess Smidt, Shuiwang Ji
Advances in artificial intelligence (AI) are fueling a new paradigm of discoveries in natural sciences.
1 code implementation • NeurIPS 2023 • Grégoire Mialon, Quentin Garrido, Hannah Lawrence, Danyal Rehman, Yann LeCun, Bobak T. Kiani
Machine learning for differential equations paves the way for computationally efficient alternatives to numerical solvers, with potentially broad impacts in science and engineering.
1 code implementation • 12 Oct 2022 • Enric Boix-Adsera, Hannah Lawrence, George Stepaniants, Philippe Rigollet
Comparing the representations learned by different neural networks has recently emerged as a key tool to understand various architectures and ultimately optimize them.
1 code implementation • 29 Jun 2022 • Saachi Jain, Hannah Lawrence, Ankur Moitra, Aleksander Madry
Moreover, by combining our framework with off-the-shelf diffusion models, we can generate images that are especially challenging for the analyzed model; these images can then be used for synthetic data augmentation that helps remedy the model's failure modes.
1 code implementation • 12 Oct 2021 • Hannah Lawrence, Kristian Georgiev, Andrew Dienes, Bobak T. Kiani
Group equivariant convolutional neural networks (G-CNNs) are generalizations of convolutional neural networks (CNNs) which excel in a wide range of technical applications by explicitly encoding symmetries, such as rotations and permutations, in their architectures.
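As a toy illustration of the mechanism (not of the architectures analyzed in the paper), a "lifting" convolution over the four planar rotations can be written as follows; the function name, shapes, and the use of SciPy's correlate2d are illustrative assumptions.

```python
import numpy as np
from scipy.signal import correlate2d

def c4_lifting_conv(image, base_filter):
    """Minimal C4 group-equivariant 'lifting' convolution: correlate the input
    with all four 90-degree rotations of one base filter, giving one output
    channel per group element."""
    return np.stack([
        correlate2d(image, np.rot90(base_filter, k), mode="same")
        for k in range(4)
    ])

img, filt = np.random.rand(16, 16), np.random.rand(3, 3)
out = c4_lifting_conv(img, filt)                 # shape (4, 16, 16)
out_rot = c4_lifting_conv(np.rot90(img), filt)

# Equivariance check: rotating the input rotates every channel and cyclically
# permutes the group axis by one step.
expected = np.roll(np.stack([np.rot90(c) for c in out]), 1, axis=0)
assert np.allclose(out_rot, expected)
```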
no code implementations • 29 Sep 2021 • Hannah Lawrence, Ankur Moitra
There is a rich literature on recovering data from limited measurements under the assumption of sparsity in some basis, whether known (compressed sensing) or unknown (dictionary learning).
no code implementations • 1 Jan 2021 • Hannah Lawrence, David Barmherzig, Henry Li, Michael Eickenberg, Marylou Gabrié
To the best of our knowledge, this is the first work to consider a dataset-free machine learning approach for holographic phase retrieval.
1 code implementation • 14 Dec 2020 • Hannah Lawrence, David A. Barmherzig, Henry Li, Michael Eickenberg, Marylou Gabrié
Phase retrieval is the inverse problem of recovering a signal from magnitude-only Fourier measurements, and underlies numerous imaging modalities, such as Coherent Diffraction Imaging (CDI).
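To make the forward model concrete, here is a minimal sketch of magnitude-only Fourier measurements together with the classical error-reduction (alternating projections) baseline under a known support constraint; this is a generic textbook baseline rather than the method proposed in the paper, and the array sizes, iteration count, and nonnegativity constraint are illustrative assumptions.

```python
import numpy as np

# Forward model of Fourier phase retrieval: only the magnitude of the Fourier
# transform is observed, so all phase information is lost.
rng = np.random.default_rng(0)
x_true = np.zeros((32, 32))
x_true[8:24, 8:24] = rng.random((16, 16))        # signal supported on a known box
magnitudes = np.abs(np.fft.fft2(x_true))

# Error reduction: alternate between enforcing the measured magnitudes in
# Fourier space and the support/nonnegativity constraints in object space.
support = np.zeros_like(x_true, dtype=bool)
support[8:24, 8:24] = True
x = rng.random(x_true.shape)
for _ in range(500):
    X = np.fft.fft2(x)
    X = magnitudes * np.exp(1j * np.angle(X))     # keep phase, replace magnitude
    x = np.real(np.fft.ifft2(X))
    x = np.where(support, np.clip(x, 0, None), 0.0)
```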
no code implementations • NeurIPS 2020 • Lin Chen, Qian Yu, Hannah Lawrence, Amin Karbasi
To establish the dimension-independent upper bound, we next show that a mini-batching algorithm provides an $ O(\frac{T}{\sqrt{K}}) $ upper bound, and therefore conclude that the minimax regret of switching-constrained OCO is $ \Theta(\frac{T}{\sqrt{K}}) $ for any $K$.
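A minimal sketch of the mini-batching idea (not necessarily the paper's exact algorithm): split the $T$ rounds into $K$ blocks, keep the action fixed within each block so that at most $K-1$ switches occur, and take one projected online gradient step per block using the block-averaged gradient. The function name, step size, and the choice of projected gradient descent over a Euclidean ball are illustrative assumptions.

```python
import numpy as np

def minibatched_ogd(grads, K, radius=1.0, lr=0.1):
    """Mini-batched online gradient descent for switching-constrained OCO:
    one fixed action per block, one projected gradient step per block."""
    T, d = grads.shape
    block = int(np.ceil(T / K))
    x = np.zeros(d)
    actions = np.zeros((T, d))
    for b in range(K):
        lo, hi = b * block, min((b + 1) * block, T)
        if lo >= T:
            break
        actions[lo:hi] = x                    # same action for the whole block
        g = grads[lo:hi].mean(axis=0)         # block-averaged (linearized) loss
        x = x - lr * g
        norm = np.linalg.norm(x)
        if norm > radius:                     # project back onto the feasible ball
            x = x * (radius / norm)
    return actions
```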