Search Results for author: Kathryn Lindsey

Found 4 papers, 1 paper with code

Hidden symmetries of ReLU networks

no code implementations · 9 Jun 2023 · J. Elisenda Grigsby, Kathryn Lindsey, David Rolnick

The parameter space for any fixed architecture of feedforward ReLU neural networks serves as a proxy during training for the associated class of functions - but how faithful is this representation?
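
One way to see the unfaithfulness concretely: ReLU is positively homogeneous, so multiplying a hidden neuron's incoming weights and bias by any c > 0 while dividing its outgoing weights by c changes the parameters but not the function the network computes. Below is a minimal NumPy sketch of this rescaling symmetry; the architecture, seed, and constant c are illustrative assumptions, not taken from the paper.

import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 3)), rng.normal(size=4)  # hidden layer: R^3 -> R^4
W2, b2 = rng.normal(size=(1, 4)), rng.normal(size=1)  # output layer: R^4 -> R

def net(x, W1, b1, W2, b2):
    # feedforward ReLU network R^3 -> R
    return W2 @ np.maximum(W1 @ x + b1, 0.0) + b2

c = 2.5                    # any c > 0 works, since ReLU(c*z) = c*ReLU(z)
W1s, b1s = c * W1, c * b1  # rescale everything feeding the hidden layer...
W2s = W2 / c               # ...and undo it on the way out

x = rng.normal(size=3)
assert np.allclose(net(x, W1, b1, W2, b2), net(x, W1s, b1s, W2s, b2))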

Functional dimension of feedforward ReLU neural networks

no code implementations · 8 Sep 2022 · J. Elisenda Grigsby, Kathryn Lindsey, Robert Meyerhoff, Chenxi Wu

It is well-known that the parameterized family of functions representable by fully-connected feedforward neural networks with ReLU activation function is precisely the class of piecewise linear functions with finitely many pieces.
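
Because many parameter settings realize the same function, the raw parameter count overstates the number of independent functional degrees of freedom. One hedged numerical probe of this local functional dimension is the rank of the Jacobian of the map from parameters to network outputs on a batch of probe inputs, estimated below by finite differences; the architecture, probe set, step size, and rank tolerance are all illustrative assumptions, not the paper's procedure.

import numpy as np

rng = np.random.default_rng(1)
W1, b1 = rng.normal(size=(4, 2)), rng.normal(size=4)
W2, b2 = rng.normal(size=(1, 4)), rng.normal(size=1)

def pack():
    return np.concatenate([W1.ravel(), b1, W2.ravel(), b2])  # 17 parameters

def unpack(p):
    w1, r = p[:8].reshape(4, 2), p[8:]
    return w1, r[:4], r[4:8].reshape(1, 4), r[8:]

X = rng.normal(size=(50, 2))  # probe inputs

def outputs(p):
    w1, c1, w2, c2 = unpack(p)
    return (w2 @ np.maximum(w1 @ X.T + c1[:, None], 0.0) + c2[:, None]).ravel()

p0, eps = pack(), 1e-6
J = np.stack([(outputs(p0 + eps * e) - outputs(p0 - eps * e)) / (2 * eps)
              for e in np.eye(p0.size)], axis=1)  # 50 x 17 Jacobian
print("parameter count:", p0.size)
print("estimated local functional dimension:", np.linalg.matrix_rank(J, tol=1e-4))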

Local and global topological complexity measures of ReLU neural network functions

no code implementations · 12 Apr 2022 · J. Elisenda Grigsby, Kathryn Lindsey, Marissa Masden

We apply a generalized piecewise-linear (PL) version of Morse theory due to Grunert-Kühnel-Rote to define and study new local and global notions of topological complexity for fully-connected feedforward ReLU neural network functions, F: R^n -> R. Along the way, we show how to construct, for each such F, a canonical polytopal complex K(F) and a deformation retract of the domain onto K(F), yielding a convenient compact model for performing calculations.
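
The cells of such a polytopal decomposition refine the regions on which F is affine, and those regions are indexed by ReLU activation patterns (which hidden neurons are on or off). The grid-sampling sketch below gives a crude empirical shadow of that decomposition by counting the distinct patterns a small network realizes on a window of R^2; the architecture, window, and resolution are illustrative assumptions, and grid sampling can miss small cells.

import numpy as np

rng = np.random.default_rng(2)
W1, b1 = rng.normal(size=(6, 2)), rng.normal(size=6)  # 6 hidden neurons on R^2

xs = np.linspace(-3, 3, 400)
grid = np.stack(np.meshgrid(xs, xs), axis=-1).reshape(-1, 2)  # 160,000 sample points

pre = grid @ W1.T + b1  # hidden pre-activations at every grid point
patterns = pre > 0      # on/off pattern = which affine piece the point lies in
n_cells = len(np.unique(patterns, axis=0))
print("distinct activation patterns sampled:", n_cells)  # at most 2**6 = 64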

On transversality of bent hyperplane arrangements and the topological expressiveness of ReLU neural networks

1 code implementation · 20 Aug 2020 · J. Elisenda Grigsby, Kathryn Lindsey

We use this obstruction to prove that a decision region of a generic, transversal ReLU network F: R^n -> R with a single hidden layer of dimension (n + 1) can have no more than one bounded connected component.
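
The statement is easy to poke at empirically. The sketch below is a plausibility check under stated assumptions, not the paper's argument: it draws a random network F: R^2 -> R with n + 1 = 3 hidden units, labels the connected components of the decision region {F > 0} on a finite grid with scipy.ndimage.label, and discards any component touching the window's border as potentially unbounded. The seed, window, and resolution are illustrative choices.

import numpy as np
from scipy.ndimage import label

rng = np.random.default_rng(3)
W1, b1 = rng.normal(size=(3, 2)), rng.normal(size=3)  # single hidden layer, n + 1 = 3 units
w2, b2 = rng.normal(size=3), rng.normal()

xs = np.linspace(-20, 20, 800)
X, Y = np.meshgrid(xs, xs)
H = np.maximum(np.einsum('hd,dij->hij', W1, np.stack([X, Y])) + b1[:, None, None], 0.0)
F = np.tensordot(w2, H, axes=1) + b2  # F evaluated on the whole grid

labels, n = label(F > 0)  # connected components of the decision region
border = set(labels[0]) | set(labels[-1]) | set(labels[:, 0]) | set(labels[:, -1])
bounded = [k for k in range(1, n + 1) if k not in border]
print("bounded components of {F > 0} in this window:", len(bounded))  # expect 0 or 1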

Task: Binary Classification
