Lane Detection
84 papers with code • 11 benchmarks • 15 datasets
Lane Detection is a computer vision task that involves identifying the boundaries of driving lanes in a video or image of a road scene. The goal is to accurately locate and track the lane markings in real-time, even in challenging conditions such as poor lighting, glare, or complex road layouts.
Lane detection is an important component of advanced driver assistance systems (ADAS) and autonomous vehicles, as it provides information about the road layout and the position of the vehicle within the lane, which is crucial for navigation and safety. The algorithms typically use a combination of computer vision techniques, such as edge detection, color filtering, and Hough transforms, to identify and track the lane markings in a road scene.
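The classical pipeline described above (edge detection followed by a Hough transform to find line candidates) can be sketched in pure NumPy. This is an illustrative toy, not any library's API: the gradient-magnitude edge detector stands in for Canny, and the synthetic image stands in for a real road frame.

```python
import numpy as np

def edge_mask(gray, thresh=50):
    # Simple gradient-magnitude edge detector (a stand-in for Canny).
    gx = np.zeros(gray.shape, dtype=float)
    gy = np.zeros(gray.shape, dtype=float)
    gx[:, 1:-1] = gray[:, 2:].astype(float) - gray[:, :-2].astype(float)
    gy[1:-1, :] = gray[2:, :].astype(float) - gray[:-2, :].astype(float)
    return np.hypot(gx, gy) > thresh

def hough_lines(edges, n_theta=180, top_k=2):
    # Classic Hough transform: every edge pixel votes for all
    # (rho, theta) lines passing through it; the strongest
    # accumulator cells are the detected line candidates.
    h, w = edges.shape
    diag = int(np.ceil(np.hypot(h, w)))
    thetas = np.linspace(0, np.pi, n_theta, endpoint=False)
    acc = np.zeros((2 * diag, n_theta), dtype=int)
    ys, xs = np.nonzero(edges)
    for t_idx, theta in enumerate(thetas):
        rhos = (xs * np.cos(theta) + ys * np.sin(theta)).astype(int) + diag
        np.add.at(acc[:, t_idx], rhos, 1)
    flat = np.argsort(acc, axis=None)[::-1][:top_k]
    peaks = [np.unravel_index(i, acc.shape) for i in flat]
    return [(rho - diag, thetas[t]) for rho, t in peaks]

# Synthetic road scene: two bright lane markings on a dark background.
img = np.zeros((200, 200), dtype=np.uint8)
for y in range(200):
    img[y, min(199, 50 + y // 4)] = 255   # left marking, sloping right
    img[y, max(0, 150 - y // 4)] = 255    # right marking, sloping left

lines = hough_lines(edge_mask(img))
print(lines)  # top (rho, theta) line candidates
```

Real ADAS pipelines add a region-of-interest crop, color filtering for white/yellow paint, and temporal smoothing across frames; learned detectors (as in the papers below) replace this hand-crafted stage entirely.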
(Image credit: End-to-end Lane Detection)

Latest papers with no code
LaneCorrect: Self-supervised Lane Detection
Lane detection has evolved into a highly functional component of autonomous driving systems, enabling them to understand driving scenes even in complex environments.
How to deal with glare for improved perception of Autonomous Vehicles
In this paper, we investigate various glare reduction techniques, including the proposed saturated pixel-aware glare reduction technique for improved performance of the computer vision (CV) tasks employed by the perception layer of AVs.
Sparse Laneformer
We show that dense anchors are not necessary for lane detection, and propose a transformer-based lane detection framework built on a sparse anchor mechanism.
Monocular 3D lane detection for Autonomous Driving: Recent Achievements, Challenges, and Outlooks
This review looks back and analyzes the current state of achievements in the field of 3D lane detection research.
ENet-21: An Optimized light CNN Structure for Lane Detection
Lane detection for autonomous vehicles is an important capability, yet it remains a challenging problem for driver assistance systems in modern vehicles.
TwinLiteNetPlus: A Stronger Model for Real-time Drivable Area and Lane Segmentation
Semantic segmentation is crucial for autonomous driving, particularly for Drivable Area and Lane Segmentation, ensuring safety and navigation.
LDTR: Transformer-based Lane Detection with Anchor-chain Representation
Despite recent advances in lane detection methods, scenarios with limited or no visual clues of lanes, due to factors such as lighting conditions and occlusion, remain challenging and crucial for automated driving.
SparseFusion: Efficient Sparse Multi-Modal Fusion Framework for Long-Range 3D Perception
The versatility of SparseFusion is also validated in the temporal object detection task and 3D lane detection task.
A Survey of Vision Transformers in Autonomous Driving: Current Trends and Future Directions
This survey explores the adaptation of visual transformer models in Autonomous Driving, a transition inspired by their success in Natural Language Processing.
LanePtrNet: Revisiting Lane Detection as Point Voting and Grouping on Curves
Existing methods are largely adapted from object detection and segmentation tasks, but these approaches require manual adjustments for curved objects, involve exhaustive searches over predefined anchors, require complex post-processing steps, and may lack flexibility when applied to real-world scenarios. In this paper, we propose a novel approach, LanePtrNet, which treats lane detection as a process of point voting and grouping on ordered sets: our method takes backbone features as input and predicts a curve-aware centerness, which represents each lane as a point and assigns the most probable center point to it.