3D Object Detection
585 papers with code • 55 benchmarks • 48 datasets
3D Object Detection is a computer vision task whose goal is to identify objects in a 3D environment and estimate their position, size, and orientation, often in real time. It is crucial for applications such as autonomous vehicles, robotics, and augmented reality.
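A 3D detection is commonly parameterized as a 7-degree-of-freedom bounding box: a 3D center, a 3D size, and a heading (yaw) angle. The sketch below is an illustrative representation only (the class name, field names, and coordinate convention are assumptions, not taken from any particular benchmark):

```python
from dataclasses import dataclass
import math

@dataclass
class Box3D:
    """A 7-DoF 3D bounding box: center, size, and heading (illustrative)."""
    x: float       # center, metres (forward)
    y: float       # center, metres (left)
    z: float       # center, metres (up)
    length: float  # extent along heading direction, metres
    width: float
    height: float
    yaw: float     # heading angle around the vertical axis, radians

    def volume(self) -> float:
        return self.length * self.width * self.height

    def corners_bev(self):
        """Four corners of the box footprint in bird's-eye view."""
        c, s = math.cos(self.yaw), math.sin(self.yaw)
        half_l, half_w = self.length / 2, self.width / 2
        return [(self.x + c * dx - s * dy, self.y + s * dx + c * dy)
                for dx, dy in [(half_l, half_w), (half_l, -half_w),
                               (-half_l, -half_w), (-half_l, half_w)]]

# Example: a car 10 m ahead, facing the same direction as the ego vehicle.
car = Box3D(x=10.0, y=2.0, z=0.8, length=4.5, width=1.8, height=1.6, yaw=0.0)
```

Detectors then score each predicted box against ground truth with a 3D (or bird's-eye-view) intersection-over-union computed from these parameters.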
(Image credit: AVOD)
Libraries
Use these libraries to find 3D Object Detection models and implementations.
Most implemented papers
PointFusion: Deep Sensor Fusion for 3D Bounding Box Estimation
We present PointFusion, a generic 3D object detection method that leverages both image and 3D point cloud information.
PointSeg: Real-Time Semantic Segmentation Based on 3D LiDAR Point Cloud
We take the spherical image, which is transformed from the 3D LiDAR point clouds, as input of the convolutional neural networks (CNNs) to predict the point-wise semantic map.
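The spherical projection described above maps each LiDAR point to a pixel using its azimuth and elevation angles, producing a dense range image a 2D CNN can consume. A minimal sketch follows; the image resolution and vertical field-of-view values are typical for a 64-beam spinning LiDAR and are assumptions for illustration, not the exact settings used by PointSeg:

```python
import numpy as np

def spherical_projection(points, h=64, w=1024,
                         fov_up_deg=3.0, fov_down_deg=-25.0):
    """Project an (N, 3) point cloud onto an h x w range image.

    Each pixel stores the range (depth) of the point mapped to it.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points, axis=1)
    yaw = np.arctan2(y, x)                                   # azimuth, [-pi, pi]
    pitch = np.arcsin(np.clip(z / np.maximum(r, 1e-8), -1.0, 1.0))

    fov_up = np.deg2rad(fov_up_deg)
    fov = fov_up - np.deg2rad(fov_down_deg)

    u = 0.5 * (1.0 - yaw / np.pi) * w                        # column from azimuth
    v = (fov_up - pitch) / fov * h                           # row from elevation
    u = np.clip(np.floor(u), 0, w - 1).astype(np.int64)
    v = np.clip(np.floor(v), 0, h - 1).astype(np.int64)

    image = np.zeros((h, w), dtype=np.float32)
    # Sort far-to-near so the nearest return wins when points share a pixel.
    order = np.argsort(-r)
    image[v[order], u[order]] = r[order]
    return image
```

Additional channels (x, y, z, intensity) are usually stacked alongside range to form the CNN input.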
MonoLoco: Monocular 3D Pedestrian Localization and Uncertainty Estimation
We tackle the fundamentally ill-posed problem of 3D human localization from monocular RGB images.
Class-balanced Grouping and Sampling for Point Cloud 3D Object Detection
This report presents our method, which won the nuScenes 3D Detection Challenge [17] held at the Workshop on Autonomous Driving (WAD, CVPR 2019).
Deep Learning for 3D Point Clouds: A Survey
To stimulate future research, this paper presents a comprehensive review of recent progress in deep learning methods for point clouds.
SMOKE: Single-Stage Monocular 3D Object Detection via Keypoint Estimation
Estimating 3D orientation and translation of objects is essential for infrastructure-less autonomous navigation and driving.
V2VNet: Vehicle-to-Vehicle Communication for Joint Perception and Prediction
In this paper, we explore the use of vehicle-to-vehicle (V2V) communication to improve the perception and motion forecasting performance of self-driving vehicles.
CenterFusion: Center-based Radar and Camera Fusion for 3D Object Detection
In this paper, we focus on the problem of radar and camera sensor fusion and propose a middle-fusion approach to exploit both radar and camera data for 3D object detection.
Objects are Different: Flexible Monocular 3D Object Detection
The precise localization of 3D objects from a single image without depth information is a highly challenging problem.
HoughNet: Integrating near and long-range evidence for visual detection
This paper presents HoughNet, a one-stage, anchor-free, voting-based, bottom-up object detection method.