Search Results for author: John Tencer

Found 4 papers, 1 paper with code

Hyper-Reduced Autoencoders for Efficient and Accurate Nonlinear Model Reductions

no code implementations • 16 Mar 2023 • Jorio Cocola, John Tencer, Francesco Rizzi, Eric Parish, Patrick Blonigan

In this work, we propose and analyze a novel method that overcomes this disadvantage by training a neural network only on subsampled versions of the high-fidelity solution snapshots.
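The abstract's key idea is training on subsampled snapshot entries rather than full high-fidelity solutions. A minimal sketch of that idea, with all names (`sample_idx`, `masked_mse`) and dimensions hypothetical rather than from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical snapshot matrix S (n_dof x n_snap) from a high-fidelity solver.
n_dof, n_snap, n_sample = 1000, 50, 40
S = rng.standard_normal((n_dof, n_snap))

# Subsample a small set of degrees of freedom; only these entries of the
# snapshots are ever seen during training.
sample_idx = rng.choice(n_dof, size=n_sample, replace=False)
S_sub = S[sample_idx, :]          # subsampled snapshots (n_sample x n_snap)

def masked_mse(decoded_sub, target_sub):
    """Reconstruction loss evaluated only at the sampled mesh points."""
    return np.mean((decoded_sub - target_sub) ** 2)

# A full autoencoder would go here; the key point is that the decoder
# output is restricted to `sample_idx` before the loss is computed, so
# the training cost scales with n_sample rather than n_dof.
loss = masked_mse(S_sub, S_sub)   # trivially zero for identical inputs
```

The sketch only illustrates the masking of the loss; the paper's actual network architecture and sampling strategy are not reproduced here.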

Parameterized Pseudo-Differential Operators for Graph Convolutional Neural Networks

no code implementations • 1 Jan 2021 • Kevin M. Potter, Steven Richard Sleder, Matthew David Smith, John Tencer

The new layer achieves a test error rate of 0.80% on the MNIST superpixel dataset, improving on the closest reported rate of 0.95% by more than 15%.

Position · Superpixel Image Classification

A compute-bound formulation of Galerkin model reduction for linear time-invariant dynamical systems

3 code implementations • 24 Sep 2020 • Francesco Rizzi, Eric J. Parish, Patrick J. Blonigan, John Tencer

This work introduces a reformulation, called rank-2 Galerkin, of the Galerkin ROM for LTI dynamical systems which converts the nature of the ROM problem from memory bandwidth to compute bound.
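The reformulation described above converts many independent matrix-vector products (memory-bandwidth bound) into a single matrix-matrix product per step (compute bound). A minimal NumPy sketch of that trade-off, with the system sizes and the simple update `x_{k+1} = A x_k` chosen here purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical reduced LTI system advanced for p trajectories
# (e.g. p forcing realizations) over several time steps.
n, p, steps = 64, 16, 10
A = 0.01 * rng.standard_normal((n, n))
X = rng.standard_normal((n, p))   # one column per trajectory

# Rank-1 style formulation: p independent mat-vecs per step (BLAS-2),
# bound by how fast A can be streamed from memory.
X1 = X.copy()
for _ in range(steps):
    for j in range(p):
        X1[:, j] = A @ X1[:, j]

# Rank-2 style formulation: one mat-mat per step (BLAS-3), which reuses
# A across all trajectories and shifts the bottleneck to compute.
X2 = X.copy()
for _ in range(steps):
    X2 = A @ X2

assert np.allclose(X1, X2)        # identical results, different cost profile
```

Both loops compute the same trajectories; only the memory-access pattern differs, which is the essence of the rank-2 Galerkin argument.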

Computational Physics · Computational Engineering, Finance, and Science · Distributed, Parallel, and Cluster Computing · Mathematical Software · Dynamical Systems

A Tailored Convolutional Neural Network for Nonlinear Manifold Learning of Computational Physics Data using Unstructured Spatial Discretizations

no code implementations • 11 Jun 2020 • John Tencer, Kevin Potter

Our custom graph convolution operators, built from the differential operators available for a given spatial discretization, effectively extend the application space of deep convolutional autoencoders to systems with arbitrarily complex geometry that are typically discretized using unstructured meshes.
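A minimal sketch of a graph convolution built from a discrete differential operator, in the spirit of the abstract. The path-graph Laplacian, feature sizes, and the specific form `Y = X W0 + (L X) W1` are illustrative assumptions, not the paper's operators:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical unstructured mesh with n nodes; a simple path-graph
# Laplacian stands in for the mesh's discrete differential operator.
n, f_in, f_out = 8, 3, 4
A = np.zeros((n, n))
for i in range(n - 1):
    A[i, i + 1] = A[i + 1, i] = 1.0
L = np.diag(A.sum(axis=1)) - A     # combinatorial graph Laplacian

X = rng.standard_normal((n, f_in))        # per-node input features
W0 = rng.standard_normal((f_in, f_out))   # weights on the identity term
W1 = rng.standard_normal((f_in, f_out))   # weights on the Laplacian term

# Convolution assembled from the available operator: each output feature
# mixes the raw node features with their discrete Laplacian.
Y = X @ W0 + (L @ X) @ W1
```

Because `L` comes from the discretization itself, the same layer applies to any mesh for which such an operator is available, which is the point the abstract makes.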
