1 code implementation • 1 Jul 2022 • Georg Hess, Johan Jaxing, Elias Svensson, David Hagerman, Christoffer Petersson, Lennart Svensson
Masked autoencoding has become a successful pretraining paradigm for Transformer models on text, images, and, recently, point clouds.
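The core of this pretraining paradigm is simple: randomly hide a large fraction of the input elements, encode only the visible ones, and train the model to reconstruct what was hidden. A minimal sketch of that masking step is shown below; the function name, mask ratio, and toy token sequence are illustrative, not taken from this paper.

```python
import random

def random_mask(num_tokens, mask_ratio=0.75, seed=0):
    """Partition token indices into visible and masked sets,
    as in masked-autoencoder pretraining (illustrative sketch)."""
    rng = random.Random(seed)
    idx = list(range(num_tokens))
    rng.shuffle(idx)
    n_masked = int(num_tokens * mask_ratio)
    masked = sorted(idx[:n_masked])
    visible = sorted(idx[n_masked:])
    return visible, masked

# Toy example: 16 input tokens (words, image patches, or point-cloud groups).
visible, masked = random_mask(16)
# The encoder processes only `visible`; the decoder is trained to
# reconstruct the original content at the `masked` positions.
```

With a high mask ratio (commonly around 0.75 for images), the encoder sees only a small subset of the input, which makes pretraining cheap while forcing the model to learn representations that can fill in the missing structure.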