no code implementations • 16 Mar 2023 • Jorio Cocola, John Tencer, Francesco Rizzi, Eric Parish, Patrick Blonigan
In this work, we propose and analyze a novel method that overcomes this disadvantage by training a neural network only on subsampled versions of the high-fidelity solution snapshots.
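The key idea above (training only on subsampled snapshot data) can be illustrated with a minimal numpy sketch. All names and sizes here are hypothetical, and a rank-r POD decoder stands in for the neural network the paper actually trains:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical snapshot matrix: each column is one high-fidelity solution
# snapshot (n_dof spatial degrees of freedom, n_snap snapshots).
n_dof, n_snap, n_sample = 1000, 50, 100
snapshots = rng.standard_normal((n_dof, n_snap))

# Subsample: keep only a small random subset of spatial indices, so the
# training data never touches the full high-fidelity state.
idx = rng.choice(n_dof, size=n_sample, replace=False)
sub_snapshots = snapshots[idx, :]           # (n_sample, n_snap)

# Stand-in for the network: a rank-r linear decoder fit by POD on the
# subsampled rows only (illustrative, not the authors' architecture).
r = 5
U, _, _ = np.linalg.svd(sub_snapshots, full_matrices=False)
basis = U[:, :r]                            # decoder acting on sampled rows

# Reconstruction error measured on the sampled entries alone.
coeffs = basis.T @ sub_snapshots
err = np.linalg.norm(sub_snapshots - basis @ coeffs) / np.linalg.norm(sub_snapshots)
print(f"relative training error on subsampled rows: {err:.3f}")
```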
no code implementations • 1 Jan 2021 • Kevin M. Potter, Steven Richard Sleder, Matthew David Smith, John Tencer
The new layer achieves a test error rate of 0.80% on the MNIST superpixel dataset, a relative improvement of more than 15% over the closest reported rate of 0.95%.
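The "more than 15%" figure follows from the relative reduction in error rate, which a two-line check confirms:

```python
# Relative improvement of the reported 0.80% error rate over the
# previous best of 0.95% on MNIST superpixels.
ours, prev = 0.80, 0.95
improvement = (prev - ours) / prev
print(f"{improvement:.1%}")  # ≈ 15.8%
```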
3 code implementations • 24 Sep 2020 • Francesco Rizzi, Eric J. Parish, Patrick J. Blonigan, John Tencer
This work introduces a reformulation, called rank-2 Galerkin, of the Galerkin ROM for LTI dynamical systems which converts the ROM problem from memory-bandwidth bound to compute bound.
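The performance intuition behind the reformulation can be sketched as follows: advancing one reduced state at a time uses matrix-vector products (BLAS-2, memory-bandwidth bound), while stacking several states as columns of a matrix turns each step into a matrix-matrix product (BLAS-3, compute bound). The system, sizes, and time stepper below are all illustrative assumptions, not the paper's actual benchmark:

```python
import numpy as np

rng = np.random.default_rng(1)
n, r, k = 200, 10, 8          # full order, reduced order, simultaneous states

A = -np.eye(n) + 0.01 * rng.standard_normal((n, n))   # hypothetical LTI operator
Phi, _ = np.linalg.qr(rng.standard_normal((n, r)))    # orthonormal reduced basis
Ar = Phi.T @ A @ Phi                                  # reduced operator (r x r)

dt = 0.01

# One trajectory at a time: forward-Euler step via a matrix-vector product.
a = rng.standard_normal(r)
a = a + dt * (Ar @ a)

# Rank-2 idea: batch k states as matrix columns so the same step becomes
# a matrix-matrix product, shifting the work toward compute-bound BLAS-3.
states = rng.standard_normal((r, k))
states = states + dt * (Ar @ states)
print(states.shape)  # (10, 8)
```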
Computational Physics • Computational Engineering, Finance, and Science • Distributed, Parallel, and Cluster Computing • Mathematical Software • Dynamical Systems
no code implementations • 11 Jun 2020 • John Tencer, Kevin Potter
Our custom graph convolution operators, built from the differential operators available for a given spatial discretization, effectively extend the application space of deep convolutional autoencoders to systems with arbitrarily complex geometry that are typically discretized using unstructured meshes.
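A toy version of building a convolution from a discrete differential operator: take the graph Laplacian of an unstructured mesh's connectivity and mix node features through it. The tiny mesh, the polynomial filter, and its weights are all illustrative assumptions, not the authors' operators:

```python
import numpy as np

# Hypothetical unstructured "mesh": 5 nodes joined by 6 edges.
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0), (1, 3)]
n = 5
W = np.zeros((n, n))
for i, j in edges:
    W[i, j] = W[j, i] = 1.0

# Graph Laplacian: a discrete differential operator on the mesh.
L = np.diag(W.sum(axis=1)) - W

# Minimal Laplacian-based graph convolution: filter node features with a
# first-order polynomial in L (weights chosen arbitrarily for illustration).
x = np.arange(n, dtype=float).reshape(n, 1)   # one scalar feature per node
theta0, theta1 = 0.5, -0.1
y = theta0 * x + theta1 * (L @ x)
print(y.ravel())
```

Because L encodes only mesh connectivity, the same filter applies unchanged to any unstructured mesh, which is what lets the approach bypass the regular-grid requirement of standard convolutions.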