3D object detection from a single image without LiDAR is a challenging task due to the lack of accurate depth information.
Estimating 3D orientation and translation of objects is essential for infrastructure-less autonomous navigation and driving.
Following the pipeline of two-stage 3D detection algorithms, we detect 2D object proposals in the input image and extract a point cloud frustum from the pseudo-LiDAR for each proposal.
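The pseudo-LiDAR frustum extraction described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes a predicted per-pixel depth map, a simple pinhole camera with hypothetical intrinsics (`fx`, `fy`, `cx`, `cy`), and axis-aligned 2D proposal boxes in pixel coordinates.

```python
import numpy as np

def depth_to_pseudo_lidar(depth, fx, fy, cx, cy):
    """Back-project a depth map into a pseudo-LiDAR point cloud.

    Each pixel (u, v) with depth z maps to camera coordinates:
        x = (u - cx) * z / fx,  y = (v - cy) * z / fy
    Returns an (H, W, 3) array of 3D points.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel grids
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1)

def frustum_points(points, box):
    """Crop the pseudo-LiDAR cloud to pixels inside a 2D proposal box.

    box is (u1, v1, u2, v2) in pixel coordinates; the returned (N, 3)
    array is the point-cloud frustum for that proposal.
    """
    u1, v1, u2, v2 = box
    return points[v1:v2, u1:u2].reshape(-1, 3)
```

Because the cloud lives in metric camera coordinates, each frustum can then be fed to any point-cloud-based 3D detector, which is the key idea behind the pseudo-LiDAR pipeline.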
This allows us to reason holistically about the spatial configuration of the scene in a domain where scale is consistent and distances between objects are meaningful.
We present MonoPSR, a monocular 3D object detection method that leverages proposals and shape reconstruction.
In this paper, we propose an approach for monocular 3D object detection from a single RGB image that leverages a novel disentangling transformation for the 2D and 3D detection losses and a novel self-supervised confidence score for 3D bounding boxes.