Search Results for author: Akiyoshi Sannai

Found 16 papers, 2 papers with code

Unification of Symmetries Inside Neural Networks: Transformer, Feedforward and Neural ODE

no code implementations • 4 Feb 2024 • Koji Hashimoto, Yuji Hirono, Akiyoshi Sannai

Understanding the inner workings of neural networks, including transformers, remains one of the most challenging puzzles in machine learning.

Integrating Large Language Models in Causal Discovery: A Statistical Causal Approach

1 code implementation • 2 Feb 2024 • Masayuki Takayama, Tadahisa Okuda, Thong Pham, Tatsuyoshi Ikenoue, Shingo Fukuma, Shohei Shimizu, Akiyoshi Sannai

In practical statistical causal discovery (SCD), embedding domain expert knowledge as constraints in the algorithm is widely accepted as essential for producing consistent, meaningful causal models, despite the recognized difficulty of systematically acquiring such background knowledge.

Causal Discovery • Causal Inference +2

A Policy Gradient Primal-Dual Algorithm for Constrained MDPs with Uniform PAC Guarantees

1 code implementation • 31 Jan 2024 • Toshinori Kitamura, Tadashi Kozuno, Masahiro Kato, Yuki Ichihara, Soichiro Nishimori, Akiyoshi Sannai, Sho Sonoda, Wataru Kumagai, Yutaka Matsuo

We study a primal-dual reinforcement learning (RL) algorithm for the online constrained Markov decision processes (CMDP) problem, wherein the agent explores an optimal policy that maximizes return while satisfying constraints.

Reinforcement Learning (RL)
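
The primal-dual idea described in the excerpt can be sketched on a toy problem: ascend the Lagrangian in the policy parameters while descending in the multiplier. This is a minimal illustration, not the paper's algorithm; the one-step problem, rewards, costs, budget, and step sizes below are all made up.

```python
import numpy as np

# Toy one-step constrained problem: maximize E[r] subject to E[c] <= b.
r = np.array([1.0, 0.5, 0.2])   # per-action reward (illustrative)
c = np.array([1.0, 0.4, 0.1])   # per-action cost (illustrative)
b = 0.5                          # cost budget

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

theta, lam = np.zeros(3), 0.0    # policy parameters, Lagrange multiplier
eta_p, eta_d = 0.2, 0.05         # primal / dual step sizes
avg_pi = np.zeros(3)             # averaged iterates approximate the mixed optimum

T = 5000
for t in range(T):
    pi = softmax(theta)
    avg_pi += pi / T
    adv = r - lam * c                            # Lagrangian payoff per action
    theta += eta_p * pi * (adv - pi @ adv)       # exact softmax policy gradient (primal ascent)
    lam = max(0.0, lam + eta_d * (pi @ c - b))   # projected dual step

print(avg_pi, float(avg_pi @ r), float(avg_pi @ c))
```

The final iterates oscillate around the saddle point, which is why the averaged policy, not the last one, is reported; its expected cost lands near the budget.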

LPML: LLM-Prompting Markup Language for Mathematical Reasoning

no code implementations • 21 Sep 2023 • Ryutaro Yamauchi, Sho Sonoda, Akiyoshi Sannai, Wataru Kumagai

In this paper, we propose a novel framework that integrates the Chain-of-Thought (CoT) method with an external tool (Python REPL).

Mathematical Reasoning
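
The control loop implied by the excerpt (a model emits markup-tagged code, the controller runs it in a REPL and feeds back the result) can be sketched as follows. The `<PYTHON>`/`<OUTPUT>` tag names and the hard-coded reply are invented for illustration; they are not taken from the paper.

```python
import re, io, contextlib

# A stand-in for a model reply containing a tagged code span (hypothetical format).
reply = "Let me compute. <PYTHON>print(12 * 34)</PYTHON>"

def run_tagged_code(text):
    """Extract a <PYTHON>...</PYTHON> span, execute it, and append its output."""
    match = re.search(r"<PYTHON>(.*?)</PYTHON>", text, re.S)
    if not match:
        return text
    buf = io.StringIO()
    with contextlib.redirect_stdout(buf):
        exec(match.group(1), {})          # run the extracted code in a fresh namespace
    return text + f" <OUTPUT>{buf.getvalue().strip()}</OUTPUT>"

print(run_tagged_code(reply))  # appends <OUTPUT>408</OUTPUT>
```

In a real system the appended `<OUTPUT>` span would be fed back to the model as the next prompt, closing the reasoning-execution loop.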

Bézier Flow: a Surface-wise Gradient Descent Method for Multi-objective Optimization

no code implementations • 23 May 2022 • Akiyoshi Sannai, Yasunari Hikima, Ken Kobayashi, Akinori Tanaka, Naoki Hamada

In this paper, we propose a strategy to construct a multi-objective optimization algorithm from a single-objective optimization algorithm by using the Bézier simplex model.

PAC learning
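
A Bézier simplex is a Bernstein-polynomial surface indexed by multi-indices over a simplex; evaluating one is a short computation. The sketch below (illustrative names, random control points) shows the evaluation rule and the standard corner-interpolation property.

```python
import numpy as np
from math import factorial
from itertools import product

def multi_indices(m, d):
    """All multi-indices alpha in N^m with sum(alpha) == d."""
    return [a for a in product(range(d + 1), repeat=m) if sum(a) == d]

def bernstein(alpha, t):
    """Multinomial Bernstein basis: (d! / prod alpha_i!) * prod t_i^alpha_i."""
    coef = factorial(sum(alpha))
    for a in alpha:
        coef //= factorial(a)
    return coef * float(np.prod(np.asarray(t, float) ** np.asarray(alpha)))

def bezier_simplex(control, t):
    """Evaluate sum_alpha B_alpha(t) p_alpha at barycentric t (sum(t) == 1)."""
    return sum(bernstein(a, t) * np.asarray(p, float) for a, p in control.items())

# Degree-2 Bezier triangle in the plane, with arbitrary control points.
control = {a: np.random.randn(2) for a in multi_indices(3, 2)}
corner = bezier_simplex(control, (1.0, 0.0, 0.0))
# At a simplex corner, the surface interpolates the corresponding control point.
assert np.allclose(corner, control[(2, 0, 0)])
```

Fitting, as in the paper, would adjust the control points `p_alpha` to sampled Pareto-front points; the basis functions above are all that the model itself requires.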

Equivariant and Invariant Reynolds Networks

no code implementations • 15 Oct 2021 • Akiyoshi Sannai, Makoto Kawano, Wataru Kumagai

We construct learning models based on the reductive Reynolds operator, called equivariant and invariant Reynolds networks (ReyNets), and prove that they have the universal approximation property.

Reynolds Equivariant and Invariant Networks

no code implementations • 29 Sep 2021 • Akiyoshi Sannai, Makoto Kawano, Wataru Kumagai

To overcome this difficulty, we consider representing the Reynolds operator as a sum over a subset instead of a sum over the whole group.
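
The Reynolds operator averages a function over a group, which makes the result invariant but costs one term per group element; the excerpt's point is that a sum over a well-chosen subset can be much cheaper. The sketch below illustrates the idea with the symmetric group versus its cyclic subgroup (n terms instead of n!); it is an illustration of the averaging principle, not the paper's construction.

```python
import numpy as np
from itertools import permutations

def base(x):
    """An arbitrary, deliberately non-symmetric base function."""
    return float(np.dot(x, np.arange(1, len(x) + 1)))

def reynolds(f, x):
    """Full Reynolds average over S_n: the result is permutation invariant."""
    perms = list(permutations(range(len(x))))
    return sum(f(x[list(p)]) for p in perms) / len(perms)

def cyclic_reynolds(f, x):
    """Average over the cyclic subgroup only: n terms instead of n!."""
    n = len(x)
    return sum(f(np.roll(x, k)) for k in range(n)) / n

x = np.array([3.0, 1.0, 2.0])
y = x[[1, 2, 0]]                  # a permutation of x
assert np.isclose(reynolds(base, x), reynolds(base, y))                # S_n-invariant
assert np.isclose(cyclic_reynolds(base, x), cyclic_reynolds(base, np.roll(x, 1)))
```

The subgroup average is only invariant under the subgroup, which is exactly the tradeoff the paper addresses by choosing the subset so that invariance under the full group is retained.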

Approximate Bayesian Computation of Bézier Simplices

no code implementations • 10 Apr 2021 • Akinori Tanaka, Akiyoshi Sannai, Ken Kobayashi, Naoki Hamada

Bézier simplex fitting algorithms have been recently proposed to approximate the Pareto set/front of multi-objective continuous optimization problems.

Group Equivariant Conditional Neural Processes

no code implementations • ICLR 2021 • Makoto Kawano, Wataru Kumagai, Akiyoshi Sannai, Yusuke Iwasawa, Yutaka Matsuo

We present the group equivariant conditional neural process (EquivCNP), a meta-learning method that is permutation invariant over the data set, as in conventional conditional neural processes (CNPs), and additionally equivariant under transformations of the data space.

Meta-Learning • Translation +1

Universal Approximation Theorem for Equivariant Maps by Group CNNs

no code implementations • 27 Dec 2020 • Wataru Kumagai, Akiyoshi Sannai

However, universal approximation theorems for CNNs have so far been derived separately, with techniques specific to each group and setting.

On the Number of Linear Functions Composing Deep Neural Network: Towards a Refined Definition of Neural Networks Complexity

no code implementations • 23 Oct 2020 • Yuuki Takai, Akiyoshi Sannai, Matthieu Cordonnier

The classical approach to measure the expressive power of deep neural networks with piecewise linear activations is based on counting their maximum number of linear regions.

Relation
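
The region-counting notion in the excerpt can be probed empirically: each ReLU activation pattern corresponds to one linear piece, so counting distinct patterns over a grid lower-bounds the number of linear regions. The small architecture and sampling grid below are arbitrary choices for illustration.

```python
import numpy as np

# A tiny 2-16 ReLU network with random weights (illustrative only).
rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((8, 2)), rng.standard_normal(8)
W2, b2 = rng.standard_normal((8, 8)), rng.standard_normal(8)

def pattern(x):
    """Sign pattern of all pre-activations: one pattern per linear region."""
    h1 = W1 @ x + b1
    h2 = W2 @ np.maximum(h1, 0.0) + b2
    return tuple((h1 > 0).astype(int)) + tuple((h2 > 0).astype(int))

grid = np.linspace(-2, 2, 100)
patterns = {pattern(np.array([u, v])) for u in grid for v in grid}
print(len(patterns))   # distinct patterns observed = lower bound on region count
```

The paper's refinement counts the number of distinct linear *functions* rather than regions, since different regions can realize the same affine map; the sampling trick above only sees the region side.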

Improved Generalization Bounds of Group Invariant / Equivariant Deep Networks via Quotient Feature Spaces

no code implementations • 15 Oct 2019 • Akiyoshi Sannai, Masaaki Imaizumi, Makoto Kawano

To describe the effect of invariance and equivariance on generalization, we develop a notion of a \textit{quotient feature space}, which measures the effect of group actions on these properties.

Generalization Bounds

Improved Generalization Bound of Permutation Invariant Deep Neural Networks

no code implementations • 25 Sep 2019 • Akiyoshi Sannai, Masaaki Imaizumi

Learning problems with data that are invariant to permutations are frequently observed in various applications, for example, point cloud data and graph neural networks.
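
A standard way to build the permutation invariance mentioned here is sum pooling over per-element features, in the Deep Sets style rho(sum_i phi(x_i)); the sketch below uses random placeholder weights and is illustrative only, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(1)
W_phi = rng.standard_normal((4, 2))   # per-element feature map phi
W_rho = rng.standard_normal((1, 4))   # readout rho

def f(points):
    """points: (n, 2) array treated as a set; output ignores element order."""
    feats = np.tanh(points @ W_phi.T)  # phi applied to each element
    pooled = feats.sum(axis=0)         # permutation-invariant pooling
    return float(W_rho @ np.tanh(pooled))

X = rng.standard_normal((5, 2))
assert np.isclose(f(X), f(X[::-1]))             # reversing the set changes nothing
assert np.isclose(f(X), f(X[rng.permutation(5)]))
```

Generalization bounds of the kind studied in the paper exploit exactly this structure: the hypothesis class only sees the pooled representative of each orbit, not every ordering.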

Asymptotic Risk of Bézier Simplex Fitting

no code implementations • 17 Jun 2019 • Akinori Tanaka, Akiyoshi Sannai, Ken Kobayashi, Naoki Hamada

In this paper, we analyze the asymptotic risks of those Bézier simplex fitting methods and derive the optimal subsample ratio for the inductive skeleton fitting.

Universal approximations of permutation invariant/equivariant functions by deep neural networks

no code implementations • 5 Mar 2019 • Akiyoshi Sannai, Yuuki Takai, Matthieu Cordonnier

In this paper, we develop a theory about the relationship between $G$-invariant/equivariant functions and deep neural networks for finite group $G$.

Reconstruction of training samples from loss functions

no code implementations • 18 May 2018 • Akiyoshi Sannai

Furthermore, as an application of this theory, we prove that the loss functions can reconstruct the inputs of the training samples up to scalar multiplication (as vectors) and can recover the number of layers and nodes of the deep neural network.
