Search Results for author: Atsushi Yamashita

Found 19 papers, 8 papers with code

WHAC: World-grounded Humans and Cameras

1 code implementation • 19 Mar 2024 • Wanqi Yin, Zhongang Cai, Ruisi Wang, Fanzhou Wang, Chen Wei, Haiyi Mei, Weiye Xiao, Zhitao Yang, Qingping Sun, Atsushi Yamashita, Ziwei Liu, Lei Yang

In this study, we aim to recover expressive parametric human models (i.e., SMPL-X) and corresponding camera poses jointly, by leveraging the synergy between three critical players: the world, the human, and the camera.

Pose Estimation

Learning Pseudo Front Depth for 2D Forward-Looking Sonar-based Multi-view Stereo

1 code implementation • 30 Jul 2022 • Yusheng Wang, Yonghoon Ji, Hiroshi Tsuchiya, Hajime Asama, Atsushi Yamashita

However, owing to the unique image formation principle, estimating 3D information from a single image faces severe ambiguity problems.

Efficient Video Deblurring Guided by Motion Magnitude

3 code implementations • 27 Jul 2022 • Yusheng Wang, Yunfan Lu, Ye Gao, Lin Wang, Zhihang Zhong, Yinqiang Zheng, Atsushi Yamashita

Video deblurring is a highly under-constrained problem due to the spatially and temporally varying blur.

Deblurring · Optical Flow Estimation

ReIL: A Framework for Reinforced Intervention-based Imitation Learning

no code implementations • 29 Mar 2022 • Rom Parnichkun, Matthew N. Dailey, Atsushi Yamashita

Compared to traditional imitation learning methods such as DAgger and DART, intervention-based imitation offers users a more convenient and sample-efficient data collection process.

Imitation Learning · Robot Navigation
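
For readers unfamiliar with the setting, a minimal sketch of an intervention-based data-collection loop (the interfaces `policy`, `expert`, and `env_step` are hypothetical stand-ins, not ReIL's API): the agent acts until the supervisor intervenes, and only the intervention states and actions are recorded as training labels.

```python
def collect_intervention_data(policy, expert, env_step, state, horizon=100):
    """Roll out `policy`; log expert labels whenever the expert intervenes.

    Hypothetical interfaces: expert(state) returns an action, or None
    when it chooses not to intervene; env_step(state, action) returns
    the next state. This is the generic loop, not ReIL's actual method.
    """
    dataset = []
    for _ in range(horizon):
        expert_action = expert(state)
        if expert_action is not None:
            # Supervisor takes over: its action drives the system and
            # doubles as a supervised label for this state.
            dataset.append((state, expert_action))
            action = expert_action
        else:
            action = policy(state)  # agent remains in control
        state = env_step(state, action)
    return dataset
```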

Neural Hybrid Automata: Learning Dynamics with Multiple Modes and Stochastic Transitions

no code implementations • NeurIPS 2021 • Michael Poli, Stefano Massaroli, Luca Scimeca, Seong Joon Oh, Sanghyuk Chun, Atsushi Yamashita, Hajime Asama, Jinkyoo Park, Animesh Garg

Effective control and prediction of dynamical systems often require appropriate handling of continuous-time and discrete, event-triggered processes.
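
To make "continuous-time plus event-triggered" concrete, here is a toy hybrid simulation (a bouncing ball, not the paper's model): a continuous flow is integrated until an event condition fires, at which point a discrete jump map is applied. Neural Hybrid Automata's contribution is learning the modes, events, and jumps from data rather than hand-coding them as done here.

```python
def simulate_hybrid(h=1.0, v=0.0, dt=1e-3, g=9.81, restitution=0.8, T=3.0):
    """Continuous flow (free fall) punctuated by event-triggered jumps
    (bounces): a toy stand-in for the hybrid systems NHA models.
    """
    traj = [(0.0, h, v)]
    t = 0.0
    while t < T:
        # continuous mode: explicit Euler step of dh/dt = v, dv/dt = -g
        h, v = h + dt * v, v - dt * g
        if h <= 0.0 and v < 0.0:          # event condition triggers...
            h, v = 0.0, -restitution * v  # ...an instantaneous jump map
        t += dt
        traj.append((t, h, v))
    return traj
```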

Learning Stochastic Optimal Policies via Gradient Descent

no code implementations • 7 Jun 2021 • Stefano Massaroli, Michael Poli, Stefano Peluchetti, Jinkyoo Park, Atsushi Yamashita, Hajime Asama

We systematically develop a learning-based treatment of stochastic optimal control (SOC), relying on direct optimization of parametric control policies.

Portfolio Optimization
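
A minimal sketch of what "direct optimization of parametric control policies" can look like in practice: unroll an Euler-Maruyama simulation of a controlled SDE and backpropagate the accumulated cost through the whole trajectory. The linear dynamics, quadratic cost, and network sizes below are illustrative assumptions, not the paper's setup.

```python
import torch

# Policy maps state to control; trained by pathwise gradients through rollouts.
policy = torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.Tanh(),
                             torch.nn.Linear(32, 1))
opt = torch.optim.Adam(policy.parameters(), lr=1e-2)
dt, sigma, steps = 0.05, 0.1, 40

for epoch in range(200):
    x = torch.randn(64, 1)            # batch of initial states
    cost = torch.zeros(64, 1)
    for _ in range(steps):
        u = policy(x)
        cost = cost + (x**2 + 0.1 * u**2) * dt          # running cost
        noise = sigma * torch.randn_like(x) * dt**0.5   # Brownian increment
        x = x + (-x + u) * dt + noise                   # Euler-Maruyama step
    loss = cost.mean()
    opt.zero_grad()
    loss.backward()   # differentiate through the entire simulated rollout
    opt.step()
```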

Optimal Energy Shaping via Neural Approximators

no code implementations • 14 Jan 2021 • Stefano Massaroli, Michael Poli, Federico Califano, Jinkyoo Park, Atsushi Yamashita, Hajime Asama

We introduce optimal energy shaping as an enhancement of classical passivity-based control methods.
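
For context, classical energy shaping for a port-Hamiltonian system seeks a feedback u = β(x) that replaces the open-loop energy H with a desired energy H_d whose minimum sits at the target state; schematically (standard passivity-based control notation, not necessarily the paper's):

$$
\dot{x} = \big(J(x) - R(x)\big)\nabla H(x) + g(x)\,u,
\qquad
g(x)\,\beta(x) = \big(J(x) - R(x)\big)\big(\nabla H_d(x) - \nabla H(x)\big),
$$

so the closed loop becomes $\dot{x} = (J - R)\nabla H_d(x)$. The paper's enhancement is to parametrize the shaped quantities with neural approximators and optimize them.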

TorchDyn: A Neural Differential Equations Library

no code implementations • 20 Sep 2020 • Michael Poli, Stefano Massaroli, Atsushi Yamashita, Hajime Asama, Jinkyoo Park

Continuous-depth learning has recently emerged as a novel perspective on deep learning, improving performance in tasks related to dynamical systems and density estimation.

Density Estimation
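
The core object of continuous-depth learning is a layer whose forward pass integrates an ODE whose vector field is a neural network. A self-contained sketch with fixed-step Euler integration (this illustrates the concept only; it is not TorchDyn's API, which provides proper solvers and adjoint-based gradients):

```python
import torch

class ContinuousDepthBlock(torch.nn.Module):
    """Integrate dz/ds = f_theta(z) over s in [0, 1] with explicit Euler."""
    def __init__(self, dim, steps=10):
        super().__init__()
        self.f = torch.nn.Sequential(torch.nn.Linear(dim, 64),
                                     torch.nn.Tanh(),
                                     torch.nn.Linear(64, dim))
        self.steps = steps

    def forward(self, z):
        ds = 1.0 / self.steps
        for _ in range(self.steps):
            z = z + ds * self.f(z)   # one Euler step of the depth flow
        return z

block = ContinuousDepthBlock(dim=2)
print(block(torch.randn(5, 2)).shape)  # torch.Size([5, 2])
```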

Hypersolvers: Toward Fast Continuous-Depth Models

1 code implementation • NeurIPS 2020 • Michael Poli, Stefano Massaroli, Atsushi Yamashita, Hajime Asama, Jinkyoo Park

The infinite-depth paradigm pioneered by Neural ODEs has launched a renaissance in the search for novel dynamical system-inspired deep learning primitives; however, their utilization in problems of non-trivial size has often proved impossible due to poor computational scalability.
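
The hypersolver recipe, roughly: pair a cheap base solver with a small network $g_\omega$ trained to absorb the base step's local truncation error. For an explicit Euler base step of size $\varepsilon$ this reads, schematically (see the paper for the exact residual and training objective):

$$
x_{k+1} = x_k + \varepsilon\, f(x_k) + \varepsilon^{2}\, g_\omega(x_k),
$$

where the $\varepsilon^{2}$ scaling matches the order of the Euler error, so a well-fit $g_\omega$ buys higher accuracy at near first-order cost.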

360° Depth Estimation from Multiple Fisheye Images with Origami Crown Representation of Icosahedron

1 code implementation • 14 Jul 2020 • Ren Komatsu, Hiromitsu Fujii, Yusuke Tamura, Atsushi Yamashita, Hajime Asama

Our proposed method is robust to camera alignments by using the extrinsic camera parameters; therefore, it can achieve precise depth estimation even when the camera alignment differs from that in the training dataset.

Depth Estimation

Stable Neural Flows

no code implementations • 18 Mar 2020 • Stefano Massaroli, Michael Poli, Michelangelo Bin, Jinkyoo Park, Atsushi Yamashita, Hajime Asama

We introduce a provably stable variant of neural ordinary differential equations (neural ODEs) whose trajectories evolve on an energy functional parametrised by a neural network.
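
The stability argument in its simplest form: if trajectories descend a neural energy $E_\theta$, then $E_\theta$ itself is a Lyapunov function by construction (the paper develops more general variants):

$$
\dot{x} = -\nabla_x E_\theta(x)
\quad\Longrightarrow\quad
\frac{d}{dt}\,E_\theta(x(t)) = -\big\|\nabla_x E_\theta(x(t))\big\|^2 \le 0 .
$$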

Port-Hamiltonian Gradient Flows

no code implementations • ICLR Workshop DeepDiffEq 2019 • Michael Poli, Stefano Massaroli, Atsushi Yamashita, Hajime Asama, Jinkyoo Park

In this paper we present a general framework for continuous-time gradient descent, often referred to as gradient flow.
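
For reference, gradient flow is the continuous-time limit of gradient descent: discretizing the ODE with explicit Euler and step size $\gamma$ recovers the familiar update.

$$
\dot{x}(t) = -\nabla f(x(t)),
\qquad
x_{k+1} = x_k - \gamma\,\nabla f(x_k).
$$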

Dissecting Neural ODEs

1 code implementation • NeurIPS 2020 • Stefano Massaroli, Michael Poli, Jinkyoo Park, Atsushi Yamashita, Hajime Asama

Continuous deep learning architectures have recently re-emerged as Neural Ordinary Differential Equations (Neural ODEs).

Graph Neural Ordinary Differential Equations

1 code implementation • 18 Nov 2019 • Michael Poli, Stefano Massaroli, Junyoung Park, Atsushi Yamashita, Hajime Asama, Jinkyoo Park

We introduce the framework of continuous-depth graph neural networks (GNNs).
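
Schematically, a graph neural ODE replaces a stack of discrete message-passing layers with depth-continuous dynamics driven by a GNN vector field conditioned on the graph $G$ (notation simplified from the paper's):

$$
\frac{dH(s)}{ds} = F_\theta\big(H(s), G\big),
\qquad H(0) = X,
\qquad H_{\text{out}} = H(S),
$$

where $H(s)$ holds the node features at depth $s$ and $X$ the input features.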

Port-Hamiltonian Approach to Neural Network Training

2 code implementations • 6 Sep 2019 • Stefano Massaroli, Michael Poli, Federico Califano, Angela Faragasso, Jinkyoo Park, Atsushi Yamashita, Hajime Asama

Neural networks are discrete entities: subdivided into discrete layers and parametrized by weights which are iteratively optimized via difference equations.

Time Series Forecasting
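
A port-Hamiltonian flow on the weights is the continuous alternative this line of work pursues: with $J$ skew-symmetric and $R$ positive semi-definite, the "energy" $H$ decreases along training trajectories, and plain gradient flow $\dot{\theta} = -\nabla L(\theta)$ is the special case $J = 0$, $R = I$, $H = L$ (schematic form, not the paper's exact construction):

$$
\dot{\theta} = \big(J(\theta) - R(\theta)\big)\nabla H(\theta),
\qquad
\frac{dH}{dt} = -\nabla H(\theta)^\top R(\theta)\,\nabla H(\theta) \le 0 .
$$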
