no code implementations • 30 Jul 2023 • Yusheng Wang, Yonghoon Ji, Chujie Wu, Hiroshi Tsuchiya, Hajime Asama, Atsushi Yamashita
A well-known problem in this field is estimating missing information in the elevation direction during sonar imaging.
1 code implementation • 30 Jul 2022 • Yusheng Wang, Yonghoon Ji, Hiroshi Tsuchiya, Hajime Asama, Atsushi Yamashita
However, owing to the unique image formation principle, estimating 3D information from a single image faces a severe ambiguity problem.
no code implementations • 22 Jun 2021 • Michael Poli, Stefano Massaroli, Clayton M. Rabideau, Junyoung Park, Atsushi Yamashita, Hajime Asama, Jinkyoo Park
We introduce the framework of continuous-depth graph neural networks (GNNs).
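The continuous-depth idea replaces a stack of message-passing layers with an ODE on node features that is integrated over a continuous "depth" variable. A minimal NumPy sketch, not the authors' implementation — the toy graph, the tanh graph-convolution vector field, and the Euler step size are all illustrative assumptions:

```python
import numpy as np

def graph_ode_step(H, A_hat, W, dt):
    # One explicit-Euler step of dH/dt = tanh(A_hat @ H @ W):
    # a graph-convolutional vector field acting on node features H.
    return H + dt * np.tanh(A_hat @ H @ W)

# Toy 3-node path graph: adjacency with self-loops, row-normalized
# (an illustrative propagation matrix, not tied to the paper).
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
A_tilde = A + np.eye(3)
A_hat = A_tilde / A_tilde.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
H = rng.standard_normal((3, 4))          # node features
W = rng.standard_normal((4, 4)) * 0.1    # shared weight matrix

# Integrate depth from t=0 to t=1 with 10 Euler steps; more steps
# would correspond to a "deeper" (finer-resolved) network.
for _ in range(10):
    H = graph_ode_step(H, A_hat, W, dt=0.1)
```

In practice an adaptive ODE solver would replace the fixed Euler loop, which is what makes the depth genuinely continuous.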
no code implementations • NeurIPS 2021 • Michael Poli, Stefano Massaroli, Luca Scimeca, Seong Joon Oh, Sanghyuk Chun, Atsushi Yamashita, Hajime Asama, Jinkyoo Park, Animesh Garg
Effective control and prediction of dynamical systems often require appropriate handling of continuous-time and discrete, event-triggered processes.
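A canonical instance of such a mixed system is a bouncing ball: continuous free-fall dynamics interrupted by a discrete, event-triggered velocity reset at impact. A minimal sketch of that structure — the dynamics, event rule, and restitution coefficient are illustrative, not the paper's model:

```python
def simulate_bounce(h0, v0, g=9.81, e=0.8, dt=1e-3, t_end=2.0):
    """Continuous flow (free fall) plus a discrete jump (impact) per step."""
    h, v, t = h0, v0, 0.0
    bounces = 0
    while t < t_end:
        # Continuous-time flow: dh/dt = v, dv/dt = -g (explicit Euler).
        h += dt * v
        v += dt * -g
        # Event-triggered discrete transition: contact with the ground.
        if h < 0.0:
            h = 0.0
            v = -e * v          # restitution: reverse and damp velocity
            bounces += 1
        t += dt
    return h, v, bounces
```

Handling both regimes in one model is exactly what a purely continuous (or purely discrete) learner struggles with.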
no code implementations • NeurIPS 2021 • Stefano Massaroli, Michael Poli, Sho Sonoda, Taiji Suzuki, Jinkyoo Park, Atsushi Yamashita, Hajime Asama
We detail a novel class of implicit neural models.
no code implementations • 7 Jun 2021 • Stefano Massaroli, Michael Poli, Stefano Peluchetti, Jinkyoo Park, Atsushi Yamashita, Hajime Asama
We systematically develop a learning-based treatment of stochastic optimal control (SOC), relying on direct optimization of parametric control policies.
no code implementations • 14 Jan 2021 • Stefano Massaroli, Michael Poli, Federico Califano, Jinkyoo Park, Atsushi Yamashita, Hajime Asama
We introduce optimal energy shaping as an enhancement of classical passivity-based control methods.
no code implementations • 20 Sep 2020 • Michael Poli, Stefano Massaroli, Atsushi Yamashita, Hajime Asama, Jinkyoo Park
Continuous-depth learning has recently emerged as a novel perspective on deep learning, improving performance in tasks related to dynamical systems and density estimation.
1 code implementation • NeurIPS 2020 • Michael Poli, Stefano Massaroli, Atsushi Yamashita, Hajime Asama, Jinkyoo Park
The infinite-depth paradigm pioneered by Neural ODEs has launched a renaissance in the search for novel dynamical system-inspired deep learning primitives; however, their utilization in problems of non-trivial size has often proved impossible due to poor computational scalability.
1 code implementation • 14 Jul 2020 • Ren Komatsu, Hiromitsu Fujii, Yusuke Tamura, Atsushi Yamashita, Hajime Asama
Our proposed method is robust to camera alignments by using the extrinsic camera parameters; therefore, it can achieve precise depth estimation even when the camera alignment differs from that in the training dataset.
no code implementations • 18 Mar 2020 • Stefano Massaroli, Michael Poli, Michelangelo Bin, Jinkyoo Park, Atsushi Yamashita, Hajime Asama
We introduce a provably stable variant of neural ordinary differential equations (neural ODEs) whose trajectories evolve on an energy functional parametrised by a neural network.
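One way such a construction yields provable stability is to define the vector field as the negative gradient of the learned energy, so the energy is non-increasing along every trajectory. A minimal sketch where a fixed quadratic stands in for the neural-network energy (an assumption for illustration):

```python
import numpy as np

# Illustrative energy E(x) = 0.5 * x^T P x with P positive definite;
# in the paper the energy functional is parametrised by a neural network.
P = np.array([[2.0, 0.5],
              [0.5, 1.0]])

def energy(x):
    return 0.5 * x @ P @ x

def grad_energy(x):
    return P @ x

# Trajectories of dx/dt = -grad E(x) dissipate E, so the origin
# (the energy minimum) is a stable equilibrium by construction.
x = np.array([1.0, -2.0])
energies = [energy(x)]
for _ in range(200):
    x = x - 0.01 * grad_energy(x)   # Euler step of the gradient flow
    energies.append(energy(x))
```

The learned energy thus plays the role of a Lyapunov function: stability follows from its monotone decrease rather than from post-hoc analysis of the trained model.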
no code implementations • ICLR Workshop DeepDiffEq 2019 • Michael Poli, Stefano Massaroli, Atsushi Yamashita, Hajime Asama, Jinkyoo Park
In this paper, we present a general framework for continuous-time gradient descent, often referred to as gradient flow.
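Gradient flow replaces the discrete update θ_{k+1} = θ_k − η ∇L(θ_k) with the ODE dθ/dt = −∇L(θ); discretizing it with explicit Euler and step η recovers ordinary gradient descent. A minimal illustration on a quadratic loss (the loss and step size are assumptions for demonstration):

```python
import numpy as np

def loss_grad(theta):
    # Gradient of L(theta) = 0.5 * ||theta - target||^2 (illustrative loss).
    target = np.array([3.0, -1.0])
    return theta - target

def gradient_flow(theta0, eta=0.1, steps=100):
    # Explicit Euler on dtheta/dt = -grad L(theta): each Euler step
    # is exactly a gradient-descent update with learning rate eta.
    theta = np.array(theta0, dtype=float)
    for _ in range(steps):
        theta = theta - eta * loss_grad(theta)
    return theta

theta_star = gradient_flow([0.0, 0.0])   # converges toward the target
```

Viewing descent as an ODE lets any off-the-shelf integrator, not just Euler, serve as the optimizer.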
no code implementations • ICLR Workshop DeepDiffEq 2019 • Stefano Massaroli, Michael Poli, Sanzhar Bakhtiyarov, Atsushi Yamashita, Hajime Asama, Jinkyoo Park
Action spaces equipped with parameter sets are a common occurrence in reinforcement learning applications.
1 code implementation • NeurIPS 2020 • Stefano Massaroli, Michael Poli, Jinkyoo Park, Atsushi Yamashita, Hajime Asama
Continuous deep learning architectures have recently re-emerged as Neural Ordinary Differential Equations (Neural ODEs).
1 code implementation • 18 Nov 2019 • Michael Poli, Stefano Massaroli, Junyoung Park, Atsushi Yamashita, Hajime Asama, Jinkyoo Park
We introduce the framework of continuous-depth graph neural networks (GNNs).
2 code implementations • 6 Sep 2019 • Stefano Massaroli, Michael Poli, Federico Califano, Angela Faragasso, Jinkyoo Park, Atsushi Yamashita, Hajime Asama
Neural networks are discrete entities: subdivided into discrete layers and parametrized by weights which are iteratively optimized via difference equations.