Search Results for author: Eric C. Cyr

Found 7 papers, 4 papers with code

Graph Neural Networks and Applied Linear Algebra

1 code implementation · 21 Oct 2023 · Nicholas S. Moore, Eric C. Cyr, Peter Ohm, Christopher M. Siefert, Raymond S. Tuminaro

With the recent interest in scientific machine learning, it is natural to ask how sparse matrix computations can leverage neural networks (NN).

Parallel Training of GRU Networks with a Multi-Grid Solver for Long Sequences

1 code implementation · 7 Mar 2022 · Gordon Euhyun Moon, Eric C. Cyr

Parallelizing Gated Recurrent Unit (GRU) networks is a challenging task, as the training procedure of GRU is inherently sequential.

Partition of unity networks: deep hp-approximation

no code implementations · 27 Jan 2021 · Kookjin Lee, Nathaniel A. Trask, Ravi G. Patel, Mamikon A. Gulian, Eric C. Cyr

Approximation theorists have established best-in-class optimal approximation rates of deep neural networks by utilizing their ability to simultaneously emulate partitions of unity and monomials.
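The partition-of-unity idea behind this abstract can be illustrated with a toy sketch (assumed names and a fixed, non-learned partition; not the paper's architecture): softmax-style partition functions, which sum to one by construction, multiply local quadratic polynomials, giving a piecewise-polynomial (hp-style) fit of a kinked target that a single global quadratic cannot match.

```python
import numpy as np

x = np.linspace(-1.0, 1.0, 101)

# Fixed partition of unity: softmax over negative squared distances to centers
# (a learned network would produce these functions; here they are hand-picked)
centers = np.array([-0.6, 0.0, 0.6])
logits = -((x[:, None] - centers) ** 2) / 0.05
phi = np.exp(logits - logits.max(axis=1, keepdims=True))
phi /= phi.sum(axis=1, keepdims=True)  # rows sum to 1: a partition of unity

# Each partition carries a local quadratic p_i(x) = c0 + c1*x + c2*x^2;
# the model is sum_i phi_i(x) * p_i(x), linear in the coefficients c
P = np.stack([np.ones_like(x), x, x**2], axis=1)                  # monomials
A = np.concatenate([phi[:, [i]] * P for i in range(3)], axis=1)   # (101, 9)

f = np.abs(x)  # target with a kink at x = 0
coef, *_ = np.linalg.lstsq(A, f, rcond=None)
err = np.max(np.abs(A @ coef - f))
```

Because the partitions localize the polynomials, the fit can place `p(x) = -x` on the left, `p(x) = x` on the right, and blend through the kink, which is the mechanism the approximation-rate results exploit.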


A physics-informed operator regression framework for extracting data-driven continuum models

1 code implementation · 25 Sep 2020 · Ravi G. Patel, Nathaniel A. Trask, Mitchell A. Wood, Eric C. Cyr

The application of deep learning toward discovery of data-driven models requires careful application of inductive biases to obtain a description of physics which is both accurate and robust.


A block coordinate descent optimizer for classification problems exploiting convexity

no code implementations · 17 Jun 2020 · Ravi G. Patel, Nathaniel A. Trask, Mamikon A. Gulian, Eric C. Cyr

By alternating between a second-order method to find globally optimal parameters for the linear layer and gradient descent to train the hidden layers, we ensure an optimal fit of the adaptive basis to data throughout training.
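A minimal sketch of the alternating scheme described in the abstract (a toy problem with illustrative names, not the authors' implementation): the last linear layer poses a convex least-squares problem that is solved exactly, while the hidden layer takes ordinary gradient-descent steps.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data standing in for a classification/regression task
X = rng.normal(size=(200, 2))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1]

# One hidden layer producing the "adaptive basis"; prediction = hidden(X) @ W2
W1 = rng.normal(scale=0.5, size=(16, 2))
b1 = np.zeros(16)

def hidden(X, W1, b1):
    return np.tanh(X @ W1.T + b1)

lr = 1e-2
for _ in range(200):
    # (1) Convex block: globally optimal linear-layer weights via least squares
    H = hidden(X, W1, b1)
    W2, *_ = np.linalg.lstsq(H, y, rcond=None)

    # (2) Nonconvex block: one gradient-descent step on the hidden layer
    r = H @ W2 - y                        # residual
    dZ = np.outer(r, W2) * (1.0 - H**2)   # back-prop through tanh
    W1 -= lr * dZ.T @ X / len(X)
    b1 -= lr * dZ.mean(axis=0)

# Re-solve the linear layer against the final basis and measure the fit
H = hidden(X, W1, b1)
W2, *_ = np.linalg.lstsq(H, y, rcond=None)
mse = np.mean((H @ W2 - y) ** 2)
```

Step (1) guarantees the linear layer is optimal for the current basis at every iteration, which is the "optimal fit of the adaptive basis to data throughout training" the abstract refers to.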


Multilevel Initialization for Layer-Parallel Deep Neural Network Training

1 code implementation · 19 Dec 2019 · Eric C. Cyr, Stefanie Günther, Jacob B. Schroder

This paper investigates multilevel initialization strategies for training very deep neural networks with a layer-parallel multigrid solver.

Robust Training and Initialization of Deep Neural Networks: An Adaptive Basis Viewpoint

no code implementations · 10 Dec 2019 · Eric C. Cyr, Mamikon A. Gulian, Ravi G. Patel, Mauro Perego, Nathaniel A. Trask

Motivated by the gap between theoretical optimal approximation rates of deep neural networks (DNNs) and the accuracy realized in practice, we seek to improve the training of DNNs.

