Search Results for author: J. Antonio Lara B.

Found 1 paper, 0 papers with code

Mixture of Experts Soften the Curse of Dimensionality in Operator Learning

no code implementations · 13 Apr 2024 · Anastasis Kratsios, Takashi Furuya, J. Antonio Lara B., Matti Lassas, Maarten de Hoop

In this paper, we construct a mixture of neural operators (MoNOs) between function spaces whose complexity is distributed over a network of expert neural operators (NOs), with each NO satisfying parameter scaling restrictions.
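The abstract describes distributing complexity over a network of expert operators. A minimal, generic mixture-of-experts sketch of that idea (this is not the paper's actual MoNO construction; the linear "experts", the softmax gate, and all names here are illustrative assumptions):

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over a score vector."""
    e = np.exp(z - z.max())
    return e / e.sum()

# Illustrative setup: each "expert" is a small linear operator acting on a
# function sampled on a grid of `dim` points; a gating network assigns a
# weight to each expert per input. This only mimics the generic
# mixture-of-experts pattern, not the paper's operator-learning bounds.
rng = np.random.default_rng(0)
n_experts, dim = 4, 8
experts = [rng.standard_normal((dim, dim)) / np.sqrt(dim) for _ in range(n_experts)]
gate_w = rng.standard_normal((n_experts, dim))

def moe_apply(u):
    """Route a discretized input function u through the gated experts."""
    weights = softmax(gate_w @ u)                     # one score per expert
    return sum(w * (E @ u) for w, E in zip(weights, experts))

u = rng.standard_normal(dim)
out = moe_apply(u)                                    # same shape as u
```

The appeal of the mixture structure, as the abstract suggests, is that each individual expert can stay small (satisfying per-expert parameter restrictions) while the ensemble covers a richer class of mappings.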

Operator learning