no code implementations • 13 Apr 2024 • Anastasis Kratsios, Takashi Furuya, J. Antonio Lara B., Matti Lassas, Maarten de Hoop
In this paper, we construct a mixture of neural operators (MoNOs) between function spaces whose complexity is distributed over a network of expert neural operators (NOs), with each NO satisfying parameter scaling restrictions.
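A minimal sketch of the mixture idea (an illustration only, not the paper's construction; the experts, gate, and sizes are hypothetical placeholders, with tiny linear maps standing in for the expert NOs):

```python
import numpy as np

# Illustration only (not the paper's construction): a mixture that routes
# each discretized input function u to one small expert operator, so that
# no single expert has to carry the full approximation complexity.
GRID, N_EXPERTS = 64, 4
rng = np.random.default_rng(0)

# Tiny linear operators stand in for the expert NOs; the gate is a
# hypothetical linear scorer over the sampled input function.
experts = [rng.normal(scale=0.1, size=(GRID, GRID)) for _ in range(N_EXPERTS)]
gate_w = rng.normal(size=(N_EXPERTS, GRID))

def mixture_apply(u: np.ndarray) -> np.ndarray:
    """Hard routing: pick the highest-scoring expert and apply it to u."""
    k = int(np.argmax(gate_w @ u))
    return experts[k] @ u

u = np.sin(np.linspace(0.0, np.pi, GRID))  # a sample input function on the grid
print(mixture_apply(u).shape)              # (64,)
```

With hard routing each input function is handled by a single small expert, which is the sense in which the overall complexity is distributed across the network of NOs.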
no code implementations • 5 Feb 2024 • Haitz Sáez de Ocáriz Borde, Takashi Furuya, Anastasis Kratsios, Marc T. Law
This improves on the optimal known bounds for traditional non-distributed deep learning models, namely ReLU MLPs, which require $\mathcal{O}(\varepsilon^{-n/2})$ parameters to achieve the same accuracy.
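To make the quoted rate concrete, a worked instance (my arithmetic, plugging values into the $\mathcal{O}(\varepsilon^{-n/2})$ rate as stated above and ignoring constants):

$$\varepsilon = 10^{-2}:\qquad n = 2 \;\Rightarrow\; \varepsilon^{-n/2} = 10^{2}, \qquad n = 100 \;\Rightarrow\; \varepsilon^{-n/2} = 10^{100},$$

i.e. the parameter count blows up exponentially in the input dimension $n$, which is the curse of dimensionality the distributed construction is meant to soften.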
no code implementations • 2 Dec 2023 • Takashi Furuya, Satoshi Okuda, Kazuma Suetake, Yoshihide Sawada
This instability stems from the difficulty of minimax optimization, and various approaches have been proposed in GANs and UDAs to overcome it.
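A textbook illustration of this instability (not the paper's setting): simultaneous gradient descent-ascent on the bilinear game $f(x, y) = xy$ has its unique equilibrium at the origin, yet the iterates spiral outward:

```python
# Toy minimax instability: simultaneous gradient descent-ascent on
# f(x, y) = x * y, where x minimizes and y maximizes.
lr = 0.1
x, y = 1.0, 1.0
for step in range(5):
    gx, gy = y, x                     # df/dx = y, df/dy = x
    x, y = x - lr * gx, y + lr * gy   # descend in x, ascend in y
    print(step, round(x, 4), round(y, 4), round(x * x + y * y, 4))
# The squared distance x^2 + y^2 grows by a factor (1 + lr^2) per step,
# so plain alternating gradients diverge from the saddle point.
```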
1 code implementation • 27 Jan 2023 • J. Antonio Lara Benitez, Takashi Furuya, Florian Faucher, Anastasis Kratsios, Xavier Tricoche, Maarten V. de Hoop
We conclude by proposing a hypernetwork version of this subfamily of NOs as a surrogate model for the aforementioned forward operator.
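A minimal sketch of the hypernetwork idea (illustrative only; the names and sizes are assumptions, and linear maps stand in for the actual NOs):

```python
import numpy as np

# Illustrative sketch: a linear "hypernetwork" H maps a parameter vector z
# (e.g. coefficients of the forward operator) to the weights of a target
# layer, which is then applied to the discretized input function u.
rng = np.random.default_rng(1)
Z_DIM, IN_DIM, OUT_DIM = 3, 32, 32
H = rng.normal(scale=0.05, size=(OUT_DIM * IN_DIM, Z_DIM))

def weights_from(z: np.ndarray) -> np.ndarray:
    """Emit the target layer's weight matrix as a function of z."""
    return (H @ z).reshape(OUT_DIM, IN_DIM)

z = np.array([0.5, -1.0, 2.0])      # hypothetical operator parameters
u = rng.normal(size=IN_DIM)         # discretized input function
print((weights_from(z) @ u).shape)  # (32,)
```

The design point: the surrogate's weights are generated from $z$ at inference time rather than retrained for each parameter setting.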
no code implementations • 26 Feb 2022 • Takashi Furuya, Hiroyuki Kusumoto, Koichi Taniguchi, Naoya Kanno, Kazuma Suetake
Notably, Gal and Ghahramani [2016] proposed approximating the entropy by the sum of the entropies of unimodal Gaussian distributions.
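For reference, the closed form being summed is the standard differential entropy of a univariate Gaussian (the sum-over-components reading follows the sentence above):

$$H\big(\mathcal{N}(\mu_i, \sigma_i^2)\big) = \tfrac{1}{2}\log\!\big(2\pi e\,\sigma_i^2\big), \qquad \widetilde{H} = \sum_{i} \tfrac{1}{2}\log\!\big(2\pi e\,\sigma_i^2\big).$$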
1 code implementation • 23 May 2021 • Takashi Furuya, Kazuma Suetake, Koichi Taniguchi, Hiroyuki Kusumoto, Ryuji Saiin, Tomohiro Daimon
Recurrent neural networks (RNNs) are a class of neural networks used for sequential tasks.
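For concreteness, the standard vanilla RNN recurrence (textbook form, not this paper's particular variant):

```python
import numpy as np

# Vanilla RNN cell: h_t = tanh(W_h h_{t-1} + W_x x_t + b),
# unrolled over a toy input sequence of length T.
rng = np.random.default_rng(2)
X_DIM, H_DIM, T = 4, 8, 10
W_h = rng.normal(scale=0.1, size=(H_DIM, H_DIM))
W_x = rng.normal(scale=0.1, size=(H_DIM, X_DIM))
b = np.zeros(H_DIM)

h = np.zeros(H_DIM)                       # initial hidden state
for x_t in rng.normal(size=(T, X_DIM)):   # iterate over time steps
    h = np.tanh(W_h @ h + W_x @ x_t + b)  # recurrence carries history
print(h.shape)  # (8,)
```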