Search Results for author: Shiro Takagi

Found 9 papers, 3 papers with code

Speculative Exploration on the Concept of Artificial Agents Conducting Autonomous Research

1 code implementation • 6 Dec 2023 • Shiro Takagi

This paper engages in a speculative exploration of the concept of an artificial agent capable of conducting research.

Towards Autonomous Hypothesis Verification via Language Models with Minimal Guidance

no code implementations • 16 Nov 2023 • Shiro Takagi, Ryutaro Yamauchi, Wataru Kumagai

Research automation efforts usually employ AI as a tool to automate specific tasks within the research process.

On the Effect of Pre-training for Transformer in Different Modality on Offline Reinforcement Learning

2 code implementations • 17 Nov 2022 • Shiro Takagi

We empirically investigate how pre-training on data of different modalities, such as language and vision, affects fine-tuning of Transformer-based models to Mujoco offline reinforcement learning tasks.

Convergence of neural networks to Gaussian mixture distribution

no code implementations • 26 Apr 2022 • Yasuhiko Asao, Ryotaro Sakamoto, Shiro Takagi

We give a proof that, under relatively mild conditions, fully-connected feed-forward deep random neural networks converge to a Gaussian mixture distribution as only the width of the last hidden layer goes to infinity.
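As a rough numerical illustration of this statement (not the paper's proof), the sketch below samples a random feed-forward network whose last hidden layer is much wider than the earlier ones. Conditional on the earlier (narrow) layers, the output is a normalized sum over the wide layer and is therefore near-Gaussian by the central limit theorem; marginalizing over the random earlier layers yields a mixture of such Gaussians. All names, widths, and the tanh activation are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_network_output(x, widths, last_width, rng):
    """One draw of a fully-connected random net evaluated at input x.
    Earlier layers are narrow and random; only the last hidden layer
    has width `last_width`, which is the limit taken in the theorem."""
    h = x
    for w in widths:
        W = rng.normal(0, 1 / np.sqrt(len(h)), size=(w, len(h)))
        h = np.tanh(W @ h)
    # Wide last hidden layer followed by a linear readout: conditional
    # on the layers above, a @ h is a normalized sum of many terms.
    W = rng.normal(0, 1 / np.sqrt(len(h)), size=(last_width, len(h)))
    h = np.tanh(W @ h)
    a = rng.normal(0, 1 / np.sqrt(last_width), size=last_width)
    return a @ h

x = np.ones(5)
samples = np.array(
    [random_network_output(x, [5, 5], 2000, rng) for _ in range(3000)]
)
# The empirical distribution over weight draws approximates a Gaussian
# mixture whose component variances depend on the random earlier layers.
print(samples.mean(), samples.std())
```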

Image recognition via Vietoris-Rips complex

no code implementations • 6 Sep 2021 • Yasuhiko Asao, Jumpei Nagase, Ryotaro Sakamoto, Shiro Takagi

By considering this weighted graph as a pseudo-metric space, we construct a Vietoris-Rips complex with a parameter $\varepsilon$ by a well-known process of algebraic topology.
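The construction can be sketched directly from its definition: a subset of points is a simplex of the Vietoris-Rips complex at scale $\varepsilon$ exactly when all of its pairwise distances are at most $\varepsilon$, i.e. when it is a clique in the $\varepsilon$-neighborhood graph. A minimal brute-force sketch (the function name and example data are assumptions, not the paper's code):

```python
from itertools import combinations
import numpy as np

def vietoris_rips(dist, eps, max_dim=2):
    """All simplices of the Vietoris-Rips complex at scale eps, up to
    dimension max_dim, from a pairwise (pseudo-)distance matrix."""
    n = len(dist)
    simplices = [(i,) for i in range(n)]   # vertices
    for k in range(2, max_dim + 2):        # edges, triangles, ...
        for S in combinations(range(n), k):
            if all(dist[i][j] <= eps for i, j in combinations(S, 2)):
                simplices.append(S)
    return simplices

# Four points on a line at spacing 1: at eps = 1.5 only the three
# consecutive edges appear, and no triangles.
pts = np.array([[0.0], [1.0], [2.0], [3.0]])
dist = np.abs(pts - pts.T)
print(vietoris_rips(dist, eps=1.5))
```

This exhaustive check is exponential in the number of points; real pipelines prune via the neighborhood graph, but the output is the same complex.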

Statistical Mechanical Analysis of Catastrophic Forgetting in Continual Learning with Teacher and Student Networks

no code implementations • 16 May 2021 • Haruka Asanuma, Shiro Takagi, Yoshihiro Nagano, Yuki Yoshida, Yasuhiko Igarashi, Masato Okada

Teacher-student learning is a framework that introduces two neural networks: one serves as the target function in supervised learning, and the other is trained to approximate it.

Continual Learning
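A minimal NumPy sketch of the teacher-student framework, assuming a one-hidden-layer tanh teacher with fixed random weights and an identically shaped student trained by SGD on teacher-labelled data (all sizes, names, and the architecture are illustrative assumptions, not the paper's setup):

```python
import numpy as np

rng = np.random.default_rng(0)

d, m = 10, 4  # input dimension, hidden width

def net(W, X):
    """One-hidden-layer tanh network with unit readout weights."""
    return np.tanh(X @ W.T).sum(axis=1)

# Teacher: fixed random weights define the target function.
Wt = rng.normal(size=(m, d)) / np.sqrt(d)
# Student: same architecture, small random initialization.
Ws = rng.normal(size=(m, d)) * 0.01

X_eval = rng.normal(size=(512, d))
initial_mse = np.mean((net(Wt, X_eval) - net(Ws, X_eval)) ** 2)

lr = 0.1
for step in range(3000):
    X = rng.normal(size=(64, d))
    err = net(Ws, X) - net(Wt, X)           # (64,)
    h = np.tanh(X @ Ws.T)                   # (64, m)
    # Gradient of 0.5 * mean(err^2) with respect to Ws.
    grad = ((err[:, None] * (1 - h ** 2)).T @ X) / len(X)
    Ws -= lr * grad

final_mse = np.mean((net(Wt, X_eval) - net(Ws, X_eval)) ** 2)
print(initial_mse, final_mse)
```

In the continual-learning setting analyzed by the paper, the student would then be trained on a second teacher, and the rise in error on the first teacher is the catastrophic forgetting being measured.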

Localized Generations with Deep Neural Networks for Multi-Scale Structured Datasets

no code implementations • 25 Sep 2019 • Yoshihiro Nagano, Shiro Takagi, Yuki Yoshida, Masato Okada

The local learning approach extracts semantic representations for these datasets by training the embedding model from scratch on each local neighborhood.

Meta-Learning
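A minimal sketch of the local learning idea, using PCA (via SVD) as a stand-in for the embedding model, fitted from scratch on each point's k nearest neighbors rather than once globally (the function name, sizes, and the choice of PCA are assumptions, not the paper's implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

def local_embeddings(X, k=10, dim=2):
    """Embed each point with a model trained only on its own
    neighborhood: for every point, fit a fresh PCA on its k nearest
    neighbors and project the point onto the local components."""
    n = len(X)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    out = np.empty((n, dim))
    for i in range(n):
        nbrs = np.argsort(d2[i])[:k]           # local neighborhood
        center = X[nbrs].mean(0)
        local = X[nbrs] - center               # center locally
        _, _, Vt = np.linalg.svd(local, full_matrices=False)
        out[i] = (X[i] - center) @ Vt[:dim].T  # project on local PCs
    return out

X = rng.normal(size=(50, 5))
Z = local_embeddings(X)
print(Z.shape)
```

The contrast with a global model is that each neighborhood gets its own components, so multi-scale structure that a single embedding would average away can be captured locally.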
