Diminishing Domain Bias by Leveraging Domain Labels in Object Detection on UAVs

Object detection from Unmanned Aerial Vehicles (UAVs) is of great importance in many aerial vision-based applications. Despite the great success of generic object detection methods, a significant performance drop is observed when they are applied to images captured by UAVs. This is due to large variations in imaging conditions, such as varying altitudes, dynamically changing viewing angles, and different capture times. These variations lead to domain imbalances, and trained models consequently suffer from domain bias. We demonstrate that domain knowledge is a valuable source of information and thus propose domain-aware object detectors that use freely accessible sensor data. By splitting the model into cross-domain and domain-specific parts, substantial performance improvements are achieved on multiple data sets across various models and metrics without changing the architecture. In particular, we achieve new state-of-the-art performance on UAVDT for embedded real-time detectors. Furthermore, we create a new airborne image data set by annotating 13,713 objects in 2,900 images featuring precise altitude and viewing angle annotations.
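To make the idea concrete, the following is a minimal PyTorch-style sketch, not the paper's implementation, of how shared (cross-domain) backbone features could be routed to domain-specific prediction heads, with the domain label derived from freely available sensor metadata such as flight altitude. All names, bin edges, and the head design are illustrative assumptions.

```python
import torch
import torch.nn as nn


class DomainAwareHead(nn.Module):
    """Route cross-domain backbone features to domain-specific prediction heads.

    Illustrative only: the paper splits a detector into cross-domain and
    domain-specific parts; the exact split point and head design may differ.
    """

    def __init__(self, feat_dim: int, num_classes: int, num_domains: int):
        super().__init__()
        # One lightweight prediction head per domain (class scores + 4 box offsets).
        self.heads = nn.ModuleList(
            nn.Conv2d(feat_dim, num_classes + 4, kernel_size=1)
            for _ in range(num_domains)
        )

    def forward(self, feats: torch.Tensor, domain_ids: list[int]) -> torch.Tensor:
        # feats: (B, C, H, W) shared features; each sample uses its own domain's head.
        outputs = [self.heads[d](f.unsqueeze(0)) for f, d in zip(feats, domain_ids)]
        return torch.cat(outputs, dim=0)


def altitude_to_domain(altitude_m: float, bins: tuple = (30.0, 70.0)) -> int:
    """Map a recorded flight altitude (metres) to a coarse domain label.

    Bin edges are assumed for illustration: 0 = low, 1 = medium, 2 = high.
    """
    return sum(altitude_m >= b for b in bins)
```

At inference time the domain label would come directly from the UAV's onboard sensor log, so no additional annotation effort is required beyond the metadata the platform already records.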
