Multi-modal Self-Supervision from Generalized Data Transformations

The recent success of self-supervised learning can be largely attributed to content-preserving transformations, which can be used to easily induce invariances. While transformations are what generate positive sample pairs for contrastive-loss training, most recent work focuses on developing new objective formulations and pays relatively little attention to the transformations themselves…
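To make the mechanism in the abstract concrete, here is a minimal sketch (not the paper's method) of how content-preserving transformations produce positive pairs for a standard contrastive (InfoNCE-style) objective. The encoder, temperature, and the specific augmentations below are illustrative placeholders, not details taken from the paper.

```python
# Sketch: two transformed views of each sample form a positive pair;
# all cross-sample pairings in the batch serve as negatives.
import torch
import torch.nn.functional as F

def info_nce(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.1) -> torch.Tensor:
    """z1[i] and z2[i] embed two transformed views of sample i."""
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature   # (N, N) pairwise similarities
    labels = torch.arange(z1.size(0))    # positives lie on the diagonal
    return F.cross_entropy(logits, labels)

# Usage: embed two augmented views of the same batch with any encoder
# (a toy linear encoder here, purely for illustration).
encoder = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 128))
x = torch.randn(8, 3, 32, 32)                  # a batch of images
view1 = torch.flip(x, dims=[3])                # stand-ins for content-preserving
view2 = x + 0.05 * torch.randn_like(x)         # transformations (flip, noise, ...)
loss = info_nce(encoder(view1), encoder(view2))
loss.backward()
```

The choice of transformations determines which invariances the encoder learns, which is precisely the design axis the paper argues has received too little attention relative to the loss itself.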
