no code implementations • 8 Mar 2024 • Geonho Bang, Kwangjin Choi, Jisong Kim, Dongsuk Kum, Jun Won Choi
The inherently noisy and sparse characteristics of radar data pose challenges in finding effective representations for 3D object detection.
no code implementations • 17 Jul 2023 • Jisong Kim, Minjae Seong, Geonho Bang, Dongsuk Kum, Jun Won Choi
While LiDAR sensors have been successfully applied to 3D object detection, the affordability of radar and camera sensors has led to growing interest in fusing radars and cameras for 3D object detection.
Ranked #4 on 3D Object Detection on nuScenes Camera-Radar
1 code implementation • ICCV 2023 • Sanmin Kim, Youngseok Kim, In-Jae Lee, Dongsuk Kum
To address this limitation, we propose a novel 3D object detection model, P2D (Predict to Detect), that integrates a prediction scheme into a detection framework to explicitly extract and leverage motion features.
1 code implementation • ICCV 2023 • Youngseok Kim, Juyeb Shin, Sanmin Kim, In-Jae Lee, Jun Won Choi, Dongsuk Kum
Autonomous driving requires an accurate and fast 3D perception system that includes 3D object detection, tracking, and segmentation.
Ranked #2 on 3D Multi-Object Tracking on nuScenes Camera-Radar
no code implementations • 10 Jan 2023 • Juyeb Shin, Francois Rameau, Hyeonjun Jeong, Dongsuk Kum
We represent map elements as a graph and propose InstaGraM, an instance-level graph model of the HD map that enables accurate and fast end-to-end vectorized HD map learning.
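At its core, representing vectorized map elements as a graph means treating each element (e.g., a lane-divider polyline) as a set of vertex nodes connected by edges. The following is a minimal, generic sketch of that data structure in NumPy; it is an illustration of the idea, not the paper's actual InstaGraM implementation:

```python
import numpy as np

def polylines_to_graph(polylines):
    """Convert vectorized map elements (lists of (x, y) vertices)
    into one graph: a node array plus directed edge index pairs."""
    nodes, edges, offset = [], [], 0
    for poly in polylines:
        poly = np.asarray(poly, dtype=np.float32)
        nodes.append(poly)
        # connect consecutive vertices within the same map element
        for i in range(len(poly) - 1):
            edges.append((offset + i, offset + i + 1))
        offset += len(poly)
    return np.vstack(nodes), np.array(edges, dtype=np.int64)

# toy example: a 3-vertex lane divider and a 2-vertex road boundary
nodes, edges = polylines_to_graph([[(0, 0), (1, 0), (2, 0)],
                                   [(0, 1), (2, 1)]])
```

Keeping edges only within an element preserves instance boundaries: no edge connects vertex 2 (end of the divider) to vertex 3 (start of the boundary).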
1 code implementation • 24 Nov 2022 • Yecheol Kim, Konyul Park, Minwook Kim, Dongsuk Kum, Jun Won Choi
Fusing data from cameras and LiDAR sensors is an essential technique to achieve robust 3D object detection.
Ranked #1 on 3D Object Detection on KITTI Cars Hard
no code implementations • 29 Oct 2022 • Youngseok Kim, Sanmin Kim, Sangmin Sim, Jun Won Choi, Dongsuk Kum
In this way, our 3D detection network can be supervised with additional depth signals derived from raw LiDAR points, which incur no human annotation cost, to estimate accurate depth without explicitly predicting a depth map.
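Using raw LiDAR returns as annotation-free depth supervision is typically done by projecting the points into the camera image to form a sparse depth target. A minimal NumPy sketch of that projection step (the intrinsics and extrinsics below are illustrative placeholders, not values from the paper):

```python
import numpy as np

def lidar_to_sparse_depth(points, T_cam_lidar, K, h, w):
    """Project LiDAR points (N, 3) into an (h, w) image to build a
    sparse depth map usable as supervision (0 = no LiDAR return)."""
    # transform points from the LiDAR frame to the camera frame
    pts_h = np.hstack([points, np.ones((points.shape[0], 1))])
    cam = (T_cam_lidar @ pts_h.T).T[:, :3]
    cam = cam[cam[:, 2] > 0]            # keep points in front of the camera
    uv = (K @ cam.T).T                  # pinhole projection
    u = (uv[:, 0] / uv[:, 2]).astype(int)
    v = (uv[:, 1] / uv[:, 2]).astype(int)
    inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    depth = np.zeros((h, w), dtype=np.float32)
    depth[v[inside], u[inside]] = cam[inside, 2]   # z-depth in camera frame
    return depth

# toy example: identity extrinsics, simple intrinsics
K = np.array([[100., 0., 32.], [0., 100., 24.], [0., 0., 1.]])
pts = np.array([[0., 0., 5.], [1., 0.5, 10.]])
d = lidar_to_sparse_depth(pts, np.eye(4), K, h=48, w=64)
```

The resulting map is zero almost everywhere, so a depth loss would be computed only at pixels with a return.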
no code implementations • 14 Sep 2022 • Youngseok Kim, Sanmin Kim, Jun Won Choi, Dongsuk Kum
Camera and radar sensors have significant advantages in cost, reliability, and maintenance compared to LiDAR.
Ranked #6 on 3D Object Detection on nuScenes Camera-Radar
no code implementations • 17 Jun 2022 • Sanmin Kim, Hyeongseok Jeon, Junwon Choi, Dongsuk Kum
Prior work in the field of motion prediction for autonomous driving tends to focus on finding a trajectory that is close to the ground-truth trajectory.
1 code implementation • CVPR 2021 • ByeoungDo Kim, Seong Hyeon Park, Seokhwan Lee, Elbek Khoshimjonov, Dongsuk Kum, Junsoo Kim, Jeong Soo Kim, Jun Won Choi
In this paper, we address the problem of predicting the future motion of a dynamic agent (called a target agent) given its current and past states as well as the information on its environment.
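The problem setup described above, predicting an agent's future motion from its current and past states, is commonly framed by slicing an observed track into a history window and a future window. A generic sketch of that formulation (the window lengths are illustrative, not the paper's settings):

```python
import numpy as np

def make_prediction_sample(track, t_now, n_past=4, n_future=6):
    """Split a target agent's (T, 2) xy track into the observed
    history (past states plus current state) and the future
    trajectory the model must predict."""
    history = track[t_now - n_past:t_now + 1]        # past + current
    future = track[t_now + 1:t_now + 1 + n_future]   # ground truth to predict
    return history, future

# toy straight-line track with 12 timesteps
track = np.arange(24, dtype=np.float32).reshape(12, 2)
hist, fut = make_prediction_sample(track, t_now=5)
```

Environment information (e.g., the map around the agent) would be encoded alongside `history` as an additional model input.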
no code implementations • 28 Feb 2020 • Hyeongseok Jeon, Junwon Choi, Dongsuk Kum
Since the number of interacting vehicles is not pre-defined, the prediction network must be scalable with respect to the number of vehicles in order to guarantee consistency in both accuracy and computational load.
no code implementations • 1 Aug 2019 • Jin Hyeok Yoo, Dongsuk Kum, Jun Won Choi
Convolutional neural networks (CNNs) have led to significant progress in object detection.