Search Results for author: Maarten de Hoop

Found 10 papers, 4 papers with code

Mixture of Experts Soften the Curse of Dimensionality in Operator Learning

no code implementations • 13 Apr 2024 • Anastasis Kratsios, Takashi Furuya, J. Antonio Lara B., Matti Lassas, Maarten de Hoop

In this paper, we construct a mixture of neural operators (MoNOs) between function spaces whose complexity is distributed over a network of expert neural operators (NOs), with each NO satisfying parameter scaling restrictions.

Operator learning
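
The abstract describes a mixture-of-experts layout in which complexity is spread across a network of small expert operators. As a rough illustration only (the class names, sizes, and soft gating below are my assumptions, not the paper's construction, which works between function spaces and imposes explicit parameter scaling restrictions on each expert), a discretized sketch in PyTorch might look like:

```python
# Illustrative sketch: a soft mixture of small "expert operators", each acting
# on a coarsely discretized input function. ExpertOperator / MixtureOfOperators
# and all sizes are hypothetical placeholders.
import torch
import torch.nn as nn


class ExpertOperator(nn.Module):
    """A small surrogate for one expert neural operator on an n-point grid."""

    def __init__(self, n_grid: int, width: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_grid, width), nn.GELU(), nn.Linear(width, n_grid)
        )

    def forward(self, u: torch.Tensor) -> torch.Tensor:  # u: (batch, n_grid)
        return self.net(u)


class MixtureOfOperators(nn.Module):
    """Soft gating distributes each input function over the experts."""

    def __init__(self, n_grid: int, n_experts: int = 4):
        super().__init__()
        self.experts = nn.ModuleList(ExpertOperator(n_grid) for _ in range(n_experts))
        self.gate = nn.Linear(n_grid, n_experts)

    def forward(self, u: torch.Tensor) -> torch.Tensor:
        weights = torch.softmax(self.gate(u), dim=-1)             # (batch, n_experts)
        outputs = torch.stack([e(u) for e in self.experts], -1)   # (batch, n_grid, n_experts)
        return (outputs * weights.unsqueeze(1)).sum(-1)           # weighted sum of experts


u = torch.randn(8, 64)                   # 8 input functions sampled on 64 grid points
print(MixtureOfOperators(64)(u).shape)   # torch.Size([8, 64])
```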

Beyond Hawkes: Neural Multi-event Forecasting on Spatio-temporal Point Processes

1 code implementation • 5 Nov 2022 • Negar Erfanian, Santiago Segarra, Maarten de Hoop

Predicting discrete events in time and space has many scientific applications, such as predicting hazardous earthquakes and outbreaks of infectious diseases.

Point Processes

Deep Invertible Approximation of Topologically Rich Maps between Manifolds

no code implementations • 2 Oct 2022 • Michael Puthawala, Matti Lassas, Ivan Dokmanić, Pekka Pankka, Maarten de Hoop

By exploiting the topological parallels between locally bilipschitz maps, covering spaces, and local homeomorphisms, and by using universal approximation arguments from machine learning, we find that a novel network of the form $\mathcal{T} \circ p \circ \mathcal{E}$, where $\mathcal{E}$ is an injective network, $p$ a fixed coordinate projection, and $\mathcal{T}$ a bijective network, is a universal approximator of local diffeomorphisms between compact smooth submanifolds embedded in $\mathbb{R}^n$.

Topological Data Analysis
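
The abstract spells out the architecture $\mathcal{T} \circ p \circ \mathcal{E}$: an injective network, a fixed coordinate projection, then a bijective network. The sketch below uses toy stand-ins for each factor (an injective ReLU lift with stacked weights $[B; -B]$, a fixed projection onto leading coordinates, and an invertible affine map); all layer choices and dimensions are my assumptions, not the paper's construction.

```python
# Toy composition T ∘ p ∘ E (applied right to left: E first, then p, then T).
import torch
import torch.nn as nn


class InjectiveReLULift(nn.Module):
    """E: x -> ReLU([B; -B] x) is injective, since Bx = ReLU(Bx) - ReLU(-Bx)."""

    def __init__(self, d_in: int, d_out: int):
        super().__init__()
        assert d_out >= d_in
        self.B = nn.Parameter(torch.randn(d_out, d_in))  # full column rank a.s.

    def forward(self, x):
        z = x @ self.B.T
        return torch.relu(torch.cat([z, -z], dim=-1))    # output dim 2 * d_out


class CoordinateProjection(nn.Module):
    """p: fixed projection keeping the first k coordinates."""

    def __init__(self, k: int):
        super().__init__()
        self.k = k

    def forward(self, x):
        return x[..., : self.k]


class InvertibleAffine(nn.Module):
    """T (toy bijection): x -> x @ exp(A) + b, with exp(A) always invertible."""

    def __init__(self, d: int):
        super().__init__()
        self.A = nn.Parameter(0.01 * torch.randn(d, d))
        self.b = nn.Parameter(torch.zeros(d))

    def forward(self, x):
        return x @ torch.matrix_exp(self.A) + self.b


d_in, d_lift, d_out = 3, 8, 5
model = nn.Sequential(
    InjectiveReLULift(d_in, d_lift),   # E: injective, maps R^3 into R^16
    CoordinateProjection(d_out),       # p: fixed projection to R^5
    InvertibleAffine(d_out),           # T: bijective on R^5
)
print(model(torch.randn(4, d_in)).shape)  # torch.Size([4, 5])
```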

Universal Joint Approximation of Manifolds and Densities by Simple Injective Flows

no code implementations • 8 Oct 2021 • Michael Puthawala, Matti Lassas, Ivan Dokmanić, Maarten de Hoop

We show that in general, injective flows between $\mathbb{R}^n$ and $\mathbb{R}^m$ universally approximate measures supported on images of extendable embeddings, which are a subset of standard embeddings: when the embedding dimension $m$ is small, topological obstructions may preclude certain manifolds as admissible targets.
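
For intuition, one simple way to build an injective map from $\mathbb{R}^n$ into $\mathbb{R}^m$ with $n < m$ is zero-padding followed by bijective coupling steps. The toy sketch below (my illustration only, not the paper's model or its extendable-embedding analysis) has exactly this form: the padding is injective and each coupling step is invertible, so the composite is injective.

```python
# Toy injective "flow" from R^2 into R^6: zero-pad, then additive couplings.
import torch
import torch.nn as nn


class ZeroPad(nn.Module):
    """Injective embedding of R^n into R^m by appending zeros."""

    def __init__(self, n: int, m: int):
        super().__init__()
        assert m > n
        self.extra = m - n

    def forward(self, x):                                  # (batch, n) -> (batch, m)
        pad = x.new_zeros(x.shape[:-1] + (self.extra,))
        return torch.cat([x, pad], dim=-1)


class AdditiveCoupling(nn.Module):
    """y1 = x1, y2 = x2 + t(x1): invertible for any shift network t."""

    def __init__(self, m: int):
        super().__init__()
        self.split = m // 2
        self.t = nn.Sequential(nn.Linear(self.split, 32), nn.Tanh(),
                               nn.Linear(32, m - self.split))

    def forward(self, x):
        x1, x2 = x[..., : self.split], x[..., self.split:]
        return torch.cat([x1, x2 + self.t(x1)], dim=-1)


flow = nn.Sequential(ZeroPad(2, 6), AdditiveCoupling(6), AdditiveCoupling(6))
print(flow(torch.randn(10, 2)).shape)   # torch.Size([10, 6])
```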

Globally Injective ReLU Networks

no code implementations • 15 Jun 2020 • Michael Puthawala, Konik Kothari, Matti Lassas, Ivan Dokmanić, Maarten de Hoop

Injectivity plays an important role in generative models, where it enables inference; in inverse problems and compressed sensing with generative priors, it is a precursor to well-posedness.

Learning the geometry of wave-based imaging

1 code implementation • NeurIPS 2020 • Konik Kothari, Maarten de Hoop, Ivan Dokmanić

We propose a general physics-based deep learning architecture for wave-based imaging problems.

Inductive Bias Position

Learning Schatten–von Neumann Operators

no code implementations • 29 Jan 2019 • Puoya Tabaghi, Maarten de Hoop, Ivan Dokmanić

We study the learnability of a class of compact operators known as Schatten–von Neumann operators.

Learning Theory
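
For reference, the standard definition (notation mine, not quoted from the paper): a compact operator $A$ on a Hilbert space belongs to the Schatten–von Neumann class $\mathcal{S}_p$, $1 \le p < \infty$, exactly when its singular values are $p$-summable, i.e. $\|A\|_{\mathcal{S}_p} = \big(\sum_{k \ge 1} \sigma_k(A)^p\big)^{1/p} < \infty$; $p = 1$ gives the trace class and $p = 2$ the Hilbert–Schmidt class.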

Inverse Problems with Invariant Multiscale Statistics

no code implementations • 18 Sep 2016 • Ivan Dokmanić, Joan Bruna, Stéphane Mallat, Maarten de Hoop

We propose a new approach to linear ill-posed inverse problems.

Computational Engineering, Finance, and Science
