GLIGEN: Open-Set Grounded Text-to-Image Generation

Large-scale text-to-image diffusion models have made remarkable advances. However, the status quo is to use text input alone, which can limit controllability. In this work, we propose GLIGEN, Grounded-Language-to-Image Generation, a novel approach that builds upon and extends the functionality of existing pre-trained text-to-image diffusion models by enabling them to also be conditioned on grounding inputs. To preserve the vast concept knowledge of the pre-trained model, we freeze all of its weights and inject the grounding information into new trainable layers via a gated mechanism. Our model achieves open-world grounded text-to-image generation with caption and bounding box condition inputs, and the grounding ability generalizes well to novel spatial configurations and concepts. GLIGEN's zero-shot performance on COCO and LVIS outperforms existing supervised layout-to-image baselines by a large margin.
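The paper describes the gated mechanism as a new gated self-attention layer inserted into each frozen transformer block: visual tokens attend jointly to themselves and to grounding tokens (e.g. encoded bounding boxes plus phrase embeddings), and the result is added back through a learnable gate initialized to zero, so training starts from the unmodified pre-trained behavior. The following is a minimal PyTorch sketch of that idea under these assumptions; the class name, dimensions, and the use of `nn.MultiheadAttention` are illustrative, not the authors' implementation.

```python
import torch
import torch.nn as nn

class GatedSelfAttention(nn.Module):
    """Sketch of a GLIGEN-style gated self-attention layer (illustrative, not official code).

    It is meant to sit inside a frozen transformer block of a pre-trained
    diffusion model; only this layer's parameters would be trained.
    """
    def __init__(self, dim: int, n_heads: int = 8):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        # Learnable gate, zero-initialized so the pre-trained model is untouched at step 0.
        self.gamma = nn.Parameter(torch.zeros(1))

    def forward(self, visual: torch.Tensor, grounding: torch.Tensor) -> torch.Tensor:
        # visual:    (B, N_v, dim) tokens from the frozen block
        # grounding: (B, N_g, dim) grounding tokens (e.g. box coordinates + text features, projected to dim)
        x = self.norm(torch.cat([visual, grounding], dim=1))
        attn_out, _ = self.attn(x, x, x)
        # Keep only the visual-token positions and gate the residual update.
        update = attn_out[:, : visual.shape[1], :]
        return visual + torch.tanh(self.gamma) * update

# Usage example: 64 visual tokens (8x8 latent) and 3 grounded boxes, feature dim 320.
layer = GatedSelfAttention(dim=320)
visual = torch.randn(2, 64, 320)
grounding = torch.randn(2, 3, 320)
out = layer(visual, grounding)  # (2, 64, 320); equals `visual` at initialization since gamma == 0
```

Because the gate starts at zero, the frozen model's outputs are reproduced exactly before any training, and the grounding signal is blended in gradually as the gate is learned.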

| Task | Dataset | Model | Metric Name | Metric Value | Global Rank |
|---|---|---|---|---|---|
| Conditional Text-to-Image Synthesis | COCO-MIG | GLIGEN (zero-shot) | Instance success rate | 0.30 | #4 |
| Conditional Text-to-Image Synthesis | COCO-MIG | GLIGEN (zero-shot) | mIoU | 0.27 | #4 |
| Text-to-Image Generation | MS COCO | GLIGEN (fine-tuned, Detection + Caption data) | FID | 5.61 | #5 |
| Text-to-Image Generation | MS COCO | GLIGEN (fine-tuned, Grounding data) | FID | 6.38 | #10 |
| Text-to-Image Generation | MS COCO | GLIGEN (fine-tuned, Detection data only) | FID | 5.82 | #6 |

Methods