no code implementations • 8 Feb 2024 • Shigemichi Matsuzaki, Takuma Sugino, Kazuhito Tanaka, Zijun Sha, Shintaro Nakaoka, Shintaro Yoshizawa, Kazuhiro Shintani
This paper describes a multi-modal data association method for global localization using object-based maps and camera images.
no code implementations • 6 Jun 2023 • Shigemichi Matsuzaki, Kenji Koide, Shuji Oishi, Masashi Yokozuka, Atsuhiko Banno
This paper describes a method of global localization based on graph-theoretic association of instances between a query and the prior map.
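Graph-theoretic instance association is often cast as finding a maximum clique in a consistency graph: nodes are candidate (query instance, map instance) pairs, and edges link pairs whose inter-instance geometry agrees. A minimal sketch of that idea, assuming 2D instance positions and a brute-force clique search (the paper's exact formulation may differ):

```python
import math
from itertools import combinations

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def max_consistent_matching(candidates, q_pos, m_pos, tol=0.3):
    """Return the largest subset of candidate (query_id, map_id) pairs
    whose pairwise inter-instance distances agree between the query and
    the map. Brute-force maximum clique on the consistency graph; this
    is fine for the small instance counts typical of object-level maps.
    (Illustrative consistency check, not the paper's exact criterion.)"""
    def compatible(a, b):
        (qa, ma), (qb, mb) = a, b
        if qa == qb or ma == mb:  # one-to-one matching only
            return False
        return abs(dist(q_pos[qa], q_pos[qb]) -
                   dist(m_pos[ma], m_pos[mb])) < tol

    # Search subsets from largest to smallest; the first fully
    # compatible subset found is a maximum clique.
    for r in range(len(candidates), 0, -1):
        for subset in combinations(candidates, r):
            if all(compatible(a, b) for a, b in combinations(subset, 2)):
                return list(subset)
    return []
```

For example, with a query scene that is a rigid translation of three map instances plus one spurious candidate, the search recovers the three geometrically consistent correspondences and rejects the outlier.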
1 code implementation • 2 Mar 2023 • Shigemichi Matsuzaki, Hiroaki Masuzawa, Jun Miura
This paper describes a method of domain adaptive training for semantic segmentation using multiple source datasets that are not necessarily relevant to the target dataset.
no code implementations • 13 Aug 2022 • Shigemichi Matsuzaki, Hiroaki Masuzawa, Jun Miura
This paper describes a method of online refinement of a scene recognition model for robot navigation that considers traversable plants, i.e., flexible plant parts that a robot can push aside while moving.
no code implementations • 12 Feb 2021 • Shigemichi Matsuzaki, Jun Miura, Hiroaki Masuzawa
The core of our idea is to use multiple rich, labeled image datasets from different environments to generate pseudo-labels for the target images, thereby transferring knowledge from multiple sources and enabling precise training of the semantic segmentation model.
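One common way to realize multi-source pseudo-labeling is to fuse the per-pixel class probabilities of models trained on each source dataset and keep only confident pixels as training targets. A minimal sketch assuming simple averaging and a confidence threshold (the fusion and filtering scheme are illustrative assumptions, not necessarily the paper's):

```python
import numpy as np

def generate_pseudo_labels(source_probs, conf_thresh=0.9, ignore_index=255):
    """Fuse per-pixel class probabilities from models trained on
    different source datasets into pseudo-labels for one target image.

    source_probs: list of (C, H, W) arrays of softmax outputs, one per
    source model. Pixels whose fused confidence falls below conf_thresh
    are set to ignore_index and excluded from the segmentation loss.
    (Illustrative fusion by averaging; the actual scheme may differ.)
    """
    fused = np.mean(np.stack(source_probs), axis=0)   # (C, H, W)
    conf = fused.max(axis=0)                          # per-pixel confidence
    labels = fused.argmax(axis=0)                     # per-pixel class id
    labels[conf < conf_thresh] = ignore_index         # drop unreliable pixels
    return labels
```

The ignore index lets a standard cross-entropy loss skip the low-confidence pixels, so the target model learns only from regions where the source models agree with high confidence.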