Image Enhanced Rotation Prediction for Self-Supervised Learning

25 Dec 2019 · Shin'ya Yamaguchi, Sekitoshi Kanai, Tetsuya Shioda, Shoichiro Takeda

Rotation prediction (Rotation) is a simple pretext task for self-supervised learning (SSL), where models learn useful representations for target vision tasks by solving pretext tasks. Although Rotation captures information about object shapes, it hardly captures information about textures. To tackle this problem, we introduce a novel pretext task called image enhanced rotation prediction (IE-Rot) for SSL. IE-Rot simultaneously solves Rotation and another pretext task based on image enhancement (e.g., sharpening and solarizing) while maintaining simplicity. Through the simultaneous prediction of rotation and image enhancement, models learn representations that capture information about not only object shapes but also textures. Our experimental results show that IE-Rot models outperform Rotation on various standard benchmarks including ImageNet classification, PASCAL-VOC detection, and COCO detection/segmentation.
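The abstract describes IE-Rot only at a high level. As a rough illustration of the idea, the following is a minimal sketch, assuming a PyTorch setup: each image receives a random rotation label and a random enhancement label, and a shared backbone with two classification heads predicts both. The names (`ROTATIONS`, `ENHANCEMENTS`, `make_pretext_sample`, `IERotHeads`), the specific enhancement set, and the loss combination are hypothetical choices for this sketch, not the authors' implementation.

```python
# Hypothetical sketch of an IE-Rot-style pretext labeling step (not the paper's code).
import random
import torch.nn as nn
from PIL import Image, ImageOps, ImageFilter

ROTATIONS = [0, 90, 180, 270]                         # standard Rotation classes
ENHANCEMENTS = [                                      # assumed enhancement set for illustration
    lambda im: im,                                    # identity
    lambda im: im.filter(ImageFilter.SHARPEN),        # sharpening
    lambda im: ImageOps.solarize(im, threshold=128),  # solarizing
]

def make_pretext_sample(image: Image.Image):
    """Apply a random rotation and enhancement; return the image and both labels."""
    rot_label = random.randrange(len(ROTATIONS))
    enh_label = random.randrange(len(ENHANCEMENTS))
    out = ENHANCEMENTS[enh_label](image.rotate(ROTATIONS[rot_label]))
    return out, rot_label, enh_label

class IERotHeads(nn.Module):
    """Shared backbone with separate rotation and enhancement classifiers."""
    def __init__(self, backbone: nn.Module, feat_dim: int):
        super().__init__()
        self.backbone = backbone
        self.rot_head = nn.Linear(feat_dim, len(ROTATIONS))
        self.enh_head = nn.Linear(feat_dim, len(ENHANCEMENTS))

    def forward(self, x):
        feats = self.backbone(x)
        return self.rot_head(feats), self.enh_head(feats)

# Training would then combine the two objectives, e.g. summing cross-entropy losses:
# loss = F.cross_entropy(rot_logits, rot_labels) + F.cross_entropy(enh_logits, enh_labels)
```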
