no code implementations • 9 Jun 2023 • J. Elisenda Grigsby, Kathryn Lindsey, David Rolnick
The parameter space for any fixed architecture of feedforward ReLU neural networks serves as a proxy during training for the associated class of functions, but how faithful is this representation?
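One reason the proxy is imperfect is that the parameter-to-function map is many-to-one. A minimal numpy sketch (illustrative, not taken from the paper) of the well-known positive rescaling symmetry of ReLU units: scaling a hidden unit's incoming weights by c > 0 and its outgoing weight by 1/c changes the parameters but not the realized function.

```python
import numpy as np

def relu_net(x, W1, b1, w2, b2):
    """One-hidden-layer ReLU network F(x) = w2 . relu(W1 x + b1) + b2."""
    return w2 @ np.maximum(W1 @ x + b1, 0.0) + b2

rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 3)); b1 = rng.normal(size=4)
w2 = rng.normal(size=4);      b2 = rng.normal()

# ReLU is positively homogeneous, so rescaling unit i's incoming weights
# by c_i > 0 and its outgoing weight by 1/c_i leaves F unchanged.
c = np.array([2.0, 0.5, 3.0, 1.0])
W1s = c[:, None] * W1; b1s = c * b1; w2s = w2 / c

for _ in range(5):
    x = rng.normal(size=3)
    assert np.allclose(relu_net(x, W1, b1, w2, b2),
                       relu_net(x, W1s, b1s, w2s, b2))
print("distinct parameters, identical function")
```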
no code implementations • 8 Sep 2022 • J. Elisenda Grigsby, Kathryn Lindsey, Robert Meyerhoff, Chenxi Wu
It is well known that the parameterized family of functions representable by fully-connected feedforward neural networks with the ReLU activation function is precisely the class of piecewise linear functions with finitely many pieces.
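As a minimal illustration of one direction of this correspondence (a standard fact, not an excerpt from the paper): the piecewise linear function |x|, which has two pieces, is realized exactly by a width-2 ReLU layer, since |x| = relu(x) + relu(-x).

```python
import numpy as np

relu = lambda z: np.maximum(z, 0.0)

def F(x):
    W1 = np.array([[1.0], [-1.0]])  # hidden pre-activations: x and -x
    w2 = np.array([1.0, 1.0])       # sum the two ReLU outputs
    return w2 @ relu(W1 @ np.atleast_1d(x))

xs = np.linspace(-3, 3, 13)
assert np.allclose([F(x) for x in xs], np.abs(xs))  # F realizes |x| exactly
```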
no code implementations • 12 Apr 2022 • J. Elisenda Grigsby, Kathryn Lindsey, Marissa Masden
We apply a generalized piecewise-linear (PL) version of Morse theory due to Grunert-Kühnel-Rote to define and study new local and global notions of topological complexity for fully-connected feedforward ReLU neural network functions, F: R^n -> R. Along the way, we show how to construct, for each such F, a canonical polytopal complex K(F) and a deformation retract of the domain onto K(F), yielding a convenient compact model for performing calculations.
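A crude empirical sketch of the combinatorial raw material behind such polytopal decompositions (this is an assumption-laden illustration, not the paper's construction of K(F)): each ReLU activation pattern of a network labels one polyhedral cell of the domain on which F is affine, so counting patterns hit by a sample grid gives a rough lower bound on the number of cells.

```python
import numpy as np

rng = np.random.default_rng(1)
W1 = rng.normal(size=(6, 2)); b1 = rng.normal(size=6)
W2 = rng.normal(size=(6, 6)); b2 = rng.normal(size=6)

def pattern(x):
    # Sign pattern of all pre-activations; constant on each polyhedral
    # cell of the decomposition of R^2 induced by the network.
    h1 = W1 @ x + b1
    h2 = W2 @ np.maximum(h1, 0.0) + b2
    return tuple((h1 > 0).tolist() + (h2 > 0).tolist())

grid = np.stack(np.meshgrid(np.linspace(-3, 3, 200),
                            np.linspace(-3, 3, 200)), axis=-1)
patterns = {pattern(x) for x in grid.reshape(-1, 2)}
print(f"{len(patterns)} activation patterns hit on the sample grid")
```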
1 code implementation • 20 Aug 2020 • J. Elisenda Grigsby, Kathryn Lindsey
We use this obstruction to prove that a decision region of a generic, transversal ReLU network F: R^n -> R with a single hidden layer of dimension (n + 1) can have no more than one bounded connected component.
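A sketch showing the bound is attainable for n = 2 (an illustrative construction under my own choice of weights, not the paper's proof): a single hidden layer of dimension n + 1 = 3, with hyperplane directions positively spanning the plane, carves out exactly one bounded component of the decision region {F > 0}.

```python
import numpy as np

relu = lambda z: np.maximum(z, 0.0)
A = np.array([[0.0, 1.0],
              [np.sqrt(3)/2, -0.5],
              [-np.sqrt(3)/2, -0.5]])  # three directions positively spanning R^2
b = np.full(3, -1.0)                   # all units inactive near the origin

def F(x):
    # F is concave (a constant minus a sum of convex ReLUs), so the
    # decision region {F > 0} is convex: one bounded connected component.
    return 1.0 - relu(A @ x + b).sum()

assert F(np.zeros(2)) > 0              # origin lies inside the region
for theta in np.linspace(0, 2*np.pi, 12, endpoint=False):
    u = np.array([np.cos(theta), np.sin(theta)])
    assert F(10.0 * u) < 0             # F < 0 far out in every direction
```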