Pretext-Invariant Representation Learning (PIRL, pronounced “pearl”) learns representations that are invariant to the transformations applied in pretext tasks. PIRL is commonly used with a pretext task based on solving jigsaw puzzles. Specifically, PIRL constructs image representations that are similar to the representations of transformed versions of the same image and dissimilar to the representations of other images.
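This objective is typically optimized with a noise-contrastive estimation (NCE) loss: the transformed image's representation should score high against the original image and low against representations of other images drawn from a memory bank. The following is a minimal numpy sketch of that idea, not the paper's implementation; the function name and arguments are illustrative, and the temperature value 0.07 follows the paper's default.

```python
import numpy as np

def pirl_nce_loss(v_i, v_t, negatives, tau=0.07):
    """Simplified PIRL-style contrastive (NCE) loss.

    v_i:        representation of the original image I, shape (d,)
    v_t:        representation of the transformed image I^t (e.g. a
                jigsaw-shuffled version of I), shape (d,)
    negatives:  representations of other images, e.g. sampled from a
                memory bank, shape (n, d)
    tau:        temperature scaling the cosine similarities
    """
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

    # Positive pair: the image and its transformed version.
    pos = np.exp(cos(v_i, v_t) / tau)
    # Negatives: representations of other images.
    neg = sum(np.exp(cos(v_t, n) / tau) for n in negatives)
    # Maximize agreement with the transformed view of the same image,
    # minimize agreement with other images.
    return -np.log(pos / (pos + neg))
```

The loss is small when `v_t` lies close to `v_i` and far from the negatives, which is exactly the invariance property described above. (The actual method additionally maintains a memory bank of exponential-moving-average representations so that many negatives can be used without large batches.)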
Source: Self-Supervised Learning of Pretext-Invariant Representations
| Task | Papers | Share |
|---|---|---|
| Self-Supervised Learning | 4 | 15.38% |
| Object Detection | 3 | 11.54% |
| Image Classification | 2 | 7.69% |
| Semantic Segmentation | 2 | 7.69% |
| Translation | 1 | 3.85% |
| Navigate | 1 | 3.85% |
| Reinforcement Learning (RL) | 1 | 3.85% |
| Robot Navigation | 1 | 3.85% |
| Zero-shot Generalization | 1 | 3.85% |