Search Results for author: Takashi Furuya

Found 6 papers, 2 papers with code

Mixture of Experts Soften the Curse of Dimensionality in Operator Learning

no code implementations • 13 Apr 2024 • Anastasis Kratsios, Takashi Furuya, J. Antonio Lara B., Matti Lassas, Maarten de Hoop

In this paper, we construct a mixture of neural operators (MoNOs) between function spaces whose complexity is distributed over a network of expert neural operators (NOs), with each NO satisfying parameter scaling restrictions.

Operator learning
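
As a rough illustration of the mixture-of-experts idea referenced in the snippet above (this is not the paper's MoNO construction; the soft gating rule, expert widths, and the discretization of input functions as fixed-length grid vectors are assumptions made only for this sketch):

```python
import numpy as np

rng = np.random.default_rng(0)

def make_expert(n_in, n_hidden, n_out):
    """One small 'expert' network: a two-layer MLP acting on a
    discretized input function (a vector of n_in grid samples)."""
    W1 = rng.standard_normal((n_hidden, n_in)) / np.sqrt(n_in)
    W2 = rng.standard_normal((n_out, n_hidden)) / np.sqrt(n_hidden)
    return lambda u: W2 @ np.tanh(W1 @ u)

def moe_forward(u, experts, Wg):
    """Soft mixture of experts: a gating map assigns a weight to each
    expert and the expert outputs are combined with those weights."""
    logits = Wg @ u
    gate = np.exp(logits - logits.max())
    gate /= gate.sum()
    return sum(g * expert(u) for g, expert in zip(gate, experts))

n_grid, n_out, n_experts = 64, 64, 4
experts = [make_expert(n_grid, 32, n_out) for _ in range(n_experts)]
Wg = rng.standard_normal((n_experts, n_grid)) / np.sqrt(n_grid)

u = np.sin(np.linspace(0, 2 * np.pi, n_grid))   # a sampled input function
v = moe_forward(u, experts, Wg)                 # mixture output on the same grid
print(v.shape)  # (64,)
```

The point of the sketch is only the division of labor: each expert stays small, and total capacity comes from routing across several of them.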

Breaking the Curse of Dimensionality with Distributed Neural Computation

no code implementations • 5 Feb 2024 • Haitz Sáez de Ocáriz Borde, Takashi Furuya, Anastasis Kratsios, Marc T. Law

This improves the optimal bounds for traditional non-distributed deep learning models, namely ReLU MLPs, which need $\mathcal{O}(\varepsilon^{-n/2})$ parameters to achieve the same accuracy.
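
To give a feel for how fast the quoted $\mathcal{O}(\varepsilon^{-n/2})$ bound grows with the input dimension $n$ (illustrative arithmetic only; the constants hidden in the $\mathcal{O}$-notation are ignored):

```latex
\varepsilon^{-n/2}\Big|_{\varepsilon=0.1,\;n=20} = (10^{-1})^{-10} = 10^{10},
\qquad
\varepsilon^{-n/2}\Big|_{\varepsilon=0.1,\;n=40} = 10^{20}.
```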

Convergences for Minimax Optimization Problems over Infinite-Dimensional Spaces Towards Stability in Adversarial Training

no code implementations • 2 Dec 2023 • Takashi Furuya, Satoshi Okuda, Kazuma Suetake, Yoshihide Sawada

This instability stems from the difficulty of the minimax optimization, and various approaches have been proposed in GANs and UDAs (unsupervised domain adaptation) to overcome it.
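
For reference, the kind of minimax problem meant here is exemplified by the standard GAN objective (a textbook formulation, not necessarily the one analyzed in the paper):

```latex
\min_{G}\max_{D}\;
\mathbb{E}_{x\sim p_{\mathrm{data}}}\!\left[\log D(x)\right]
+\mathbb{E}_{z\sim p_{z}}\!\left[\log\bigl(1-D(G(z))\bigr)\right]
```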

Theoretical Error Analysis of Entropy Approximation for Gaussian Mixture

no code implementations • 26 Feb 2022 • Takashi Furuya, Hiroyuki Kusumoto, Koichi Taniguchi, Naoya Kanno, Kazuma Suetake

Notably, Gal and Ghahramani [2016] proposed an approximate entropy given by the sum of the entropies of the unimodal Gaussian components.

Variational Inference
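
A small numerical sketch of this kind of component-wise entropy approximation (the exact weighting convention used by Gal and Ghahramani [2016] and in the paper is not given in the excerpt; the mixture-weighted sum below is an assumption, and the Monte Carlo estimate serves only as a reference point):

```python
import numpy as np

rng = np.random.default_rng(0)

# A 1-D Gaussian mixture: weights, means, standard deviations.
weights = np.array([0.3, 0.7])
means = np.array([-2.0, 1.5])
stds = np.array([0.8, 1.2])

def gaussian_entropy(std):
    """Differential entropy of a 1-D Gaussian N(mu, std^2)."""
    return 0.5 * np.log(2 * np.pi * np.e * std**2)

# Approximate entropy: weighted sum of component entropies (assumed weighting).
approx_entropy = np.sum(weights * gaussian_entropy(stds))

# Monte Carlo estimate of the true mixture entropy, for comparison.
comps = rng.choice(len(weights), size=200_000, p=weights)
samples = rng.normal(means[comps], stds[comps])
dens = sum(w / (s * np.sqrt(2 * np.pi)) * np.exp(-(samples - m) ** 2 / (2 * s**2))
           for w, m, s in zip(weights, means, stds))
mc_entropy = -np.mean(np.log(dens))

print(f"approximate entropy: {approx_entropy:.3f}")
print(f"Monte Carlo entropy: {mc_entropy:.3f}")
```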

Spectral Pruning for Recurrent Neural Networks

1 code implementation • 23 May 2021 • Takashi Furuya, Kazuma Suetake, Koichi Taniguchi, Hiroyuki Kusumoto, Ryuji Saiin, Tomohiro Daimon

Recurrent neural networks (RNNs) are a class of neural networks used in sequential tasks.

Edge-computing
