no code implementations • 8 May 2024 • Sander Elias Magnussen Helgesen, Kazuto Nakashima, Jim Tørresen, Ryo Kurazume
Existing approaches have demonstrated the potential of diffusion models to generate refined, high-fidelity LiDAR data, although their performance and speed have been limited.
1 code implementation • 17 Sep 2023 • Kazuto Nakashima, Ryo Kurazume
In this work, we present R2DM, a novel generative model for LiDAR data that can generate diverse and high-fidelity 3D scene point clouds based on the image representation of range and reflectance intensity.
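R2DM applies a denoising diffusion model to a two-channel image representation of range and reflectance intensity. As a rough illustration only, the standard DDPM forward (noising) step that such models are trained to invert can be sketched as follows; the channel layout, image size, and noise schedule here are hypothetical placeholders, not the paper's actual configuration:

```python
import numpy as np

def ddpm_forward(x0, t, betas, rng):
    """Sample x_t ~ q(x_t | x_0) for a DDPM-style forward process.

    x0    : clean image, e.g. (2, H, W) for (range, intensity) channels
    t     : integer timestep index into the schedule
    betas : per-step noise variances (the schedule itself is a design choice)
    rng   : numpy Generator
    Returns the noised sample x_t and the Gaussian noise used (the usual
    regression target for the denoiser).
    """
    alphas = 1.0 - betas
    alpha_bar = np.cumprod(alphas)[t]  # cumulative signal-retention factor
    noise = rng.standard_normal(x0.shape)
    x_t = np.sqrt(alpha_bar) * x0 + np.sqrt(1.0 - alpha_bar) * noise
    return x_t, noise
```

A denoising network is then trained to predict `noise` from `x_t` and `t`; sampling runs the learned reverse process from pure noise back to a clean range/intensity image.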
1 code implementation • 21 Oct 2022 • Kazuto Nakashima, Yumi Iwashita, Ryo Kurazume
We demonstrate the fidelity and diversity of our model in comparison with state-of-the-art point-based and image-based generative models.
no code implementations • 21 Oct 2022 • Shoko Miyauchi, Ken'ichi Morooka, Ryo Kurazume
Moreover, the unified structure of isomorphic meshes allows the same process to be applied to all of them, whereas general mesh models require processing tailored to each individual mesh structure.
1 code implementation • 23 Feb 2021 • Kazuto Nakashima, Ryo Kurazume
Following related studies, we represent LiDAR data as a compact yet lossless cylindrical depth map.
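A cylindrical (range-image) representation maps each LiDAR return to a pixel via its azimuth and elevation angles, with depth as the pixel value. A minimal sketch of this standard projection is below; the resolution and vertical field-of-view values are hypothetical (chosen to resemble a 64-beam spinning LiDAR), not those of the paper:

```python
import numpy as np

def points_to_cylindrical_depth(points, h=64, w=1024, fov_up=3.0, fov_down=-25.0):
    """Project LiDAR points (N, 3) onto an (h, w) cylindrical depth map.

    Azimuth maps to columns, elevation to rows; fov_up/fov_down (degrees)
    bound the vertical field of view. Values are example assumptions.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    depth = np.linalg.norm(points, axis=1)
    yaw = np.arctan2(y, x)                          # azimuth in [-pi, pi]
    pitch = np.arcsin(z / np.maximum(depth, 1e-8))  # elevation angle
    fov_up_r = np.radians(fov_up)
    fov_r = np.radians(fov_up - fov_down)
    u = 0.5 * (1.0 - yaw / np.pi) * w               # column index
    v = (fov_up_r - pitch) / fov_r * h              # row index (top = fov_up)
    u = np.clip(np.floor(u), 0, w - 1).astype(int)
    v = np.clip(np.floor(v), 0, h - 1).astype(int)
    img = np.zeros((h, w), dtype=np.float32)
    # write far points first so the nearest return wins per pixel
    order = np.argsort(-depth)
    img[v[order], u[order]] = depth[order]
    return img
```

The projection is invertible up to quantization: each pixel's row/column angles plus its depth recover the original 3D point, which is why the representation is considered nearly lossless for single-return scans.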