Illiterate DALL$\cdot$E Learns to Compose

ICLR 2022 · Gautam Singh, Fei Deng, Sungjin Ahn

DALL$\cdot$E has shown an impressive ability to perform composition-based systematic generalization in image generation. This is possible because it is trained on a dataset of text-image pairs in which the text provides the source of compositionality. A natural follow-up question is whether this compositionality can still be achieved without conditioning on text. In this paper, we propose an architecture called $\textit{Slot2Seq}$ that achieves this text-free DALL$\cdot$E by learning compositional slot-based representations purely from images, an ability that DALL$\cdot$E lacks. Unlike existing object-centric representation models, which decode pixels independently for each slot and each pixel location and compose them via mixture-based alpha composition, we use an Image GPT decoder conditioned on the slots, enabling more flexible generation by capturing complex interactions among the pixels and the slots. In experiments, we show that this simple architecture achieves zero-shot generation of novel images without text and higher generation quality than models based on mixture decoders.
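
To make the architectural contrast concrete, below is a minimal, hypothetical PyTorch sketch of a slot-conditioned autoregressive decoder in the spirit the abstract describes: discrete image tokens are predicted causally, while cross-attention lets every token attend to all slots jointly. This is not the authors' implementation; the class name, dimensions, and hyperparameters are illustrative assumptions.

```python
# Sketch only: a transformer decoder over discrete image tokens,
# cross-attending to object slots instead of text tokens.
import torch
import torch.nn as nn

class SlotConditionedDecoder(nn.Module):
    def __init__(self, vocab_size=1024, d_model=192, slot_dim=192,
                 num_layers=4, num_heads=4, max_len=256):
        super().__init__()
        self.tok_emb = nn.Embedding(vocab_size, d_model)
        self.pos_emb = nn.Embedding(max_len, d_model)
        self.slot_proj = nn.Linear(slot_dim, d_model)
        layer = nn.TransformerDecoderLayer(d_model, num_heads,
                                           batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers)
        self.head = nn.Linear(d_model, vocab_size)

    def forward(self, tokens, slots):
        # tokens: (B, T) discrete image-token ids (e.g. from a dVAE)
        # slots:  (B, K, slot_dim) slots (e.g. from Slot Attention)
        B, T = tokens.shape
        pos = torch.arange(T, device=tokens.device)
        x = self.tok_emb(tokens) + self.pos_emb(pos)
        # Causal mask: each token may only attend to earlier tokens.
        causal = torch.triu(torch.full((T, T), float('-inf'),
                                       device=tokens.device), diagonal=1)
        # Self-attention is causal over image tokens; cross-attention
        # lets every token attend to all K slots at once, so a pixel
        # token can depend jointly on several slots rather than being
        # decoded by a single slot in isolation.
        h = self.decoder(x, self.slot_proj(slots), tgt_mask=causal)
        return self.head(h)  # (B, T, vocab_size) next-token logits
```

By contrast, a mixture-based decoder would render each slot into its own full-resolution image and alpha mask independently, then blend them per pixel; the cross-attention above is what allows the "complex interactions among the pixels and the slots" that the abstract refers to.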
