You Only Look at Once for Real-time and Generic Multi-Task

2 Oct 2023  ·  Jiayuan Wang, Q. M. Jonathan Wu, Ning Zhang ·

High precision, lightweight design, and real-time responsiveness are three essential requirements for implementing autonomous driving. In this study, we present an adaptive, real-time, and lightweight multi-task model designed to concurrently address object detection, drivable area segmentation, and lane line segmentation. Specifically, we developed an end-to-end multi-task model with a unified and streamlined segmentation structure. We introduced a learnable parameter that adaptively concatenates features in the segmentation necks, using the same loss function for all segmentation tasks. This eliminates the need for task-specific customization and enhances the model's generalization capability. We also introduced a segmentation head composed only of a series of convolutional layers, which reduces inference time. The model achieves competitive results on the BDD100K dataset, particularly in visualization outcomes: a mAP50 of 81.1% for object detection, a mIoU of 91.0% for drivable area segmentation, and an IoU of 28.8% for lane line segmentation. Additionally, we evaluated the model in real-world scenarios, where it significantly outperforms competitors. These results demonstrate that our model not only delivers competitive performance but is also more flexible and faster than existing multi-task models. The source code and pre-trained models are released at https://github.com/JiayuanWang-JW/YOLOv8-multi-task
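The core idea of the adaptive neck can be illustrated with a minimal sketch: a scalar parameter, learned during training, gates a skip feature before it is concatenated with the neck feature, so each segmentation branch can weight shared features without a hand-tuned structure. This is a hypothetical NumPy illustration of the concept, not the paper's actual implementation; the class name `AdaptiveConcat` and the single-scalar gate are assumptions for clarity.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class AdaptiveConcat:
    """Hypothetical sketch of a learnable concatenation: a scalar gate
    (optimized jointly with the network in practice) rescales the skip
    feature before it is concatenated with the neck feature."""
    def __init__(self):
        self.alpha = 0.0  # learnable logit; sigmoid(0.0) = 0.5 at init

    def __call__(self, neck_feat, skip_feat):
        gate = sigmoid(self.alpha)          # squashed to (0, 1)
        return np.concatenate([neck_feat, gate * skip_feat], axis=1)

# Toy feature maps shaped (batch, channels, height, width).
neck = np.ones((1, 8, 4, 4))
skip = np.ones((1, 8, 4, 4))
fused = AdaptiveConcat()(neck, skip)
print(fused.shape)  # (1, 16, 4, 4)
```

Because the gate is a single trainable scalar per connection, the same generic neck structure can serve both the drivable-area and lane-line branches, with training deciding how strongly each skip feature contributes.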



Results from the Paper


Task                     | Dataset     | Model      | Metric   | Value | Global Rank
-------------------------|-------------|------------|----------|-------|------------
Lane Detection           | BDD100K val | A-YOLOM(s) | Accuracy | 84.9  | #3
Lane Detection           | BDD100K val | A-YOLOM(s) | IoU (%)  | 28.8  | #4
Drivable Area Detection  | BDD100K val | A-YOLOM(s) | mIoU     | 91.0  | #5
Traffic Object Detection | BDD100K val | A-YOLOM(s) | Recall   | 86.9  | #6
Traffic Object Detection | BDD100K val | A-YOLOM(s) | mAP50    | 81.1  | #3
