Scene Generation
65 papers with code • 5 benchmarks • 8 datasets
Most implemented papers
Semantic Palette: Guiding Scene Generation with Class Proportions
Despite the recent progress of generative adversarial networks (GANs) at synthesizing photo-realistic images, producing complex urban scenes remains a challenging problem.
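The class-proportion conditioning behind Semantic Palette can be illustrated with a small sketch: given a semantic label map, the "palette" is simply the fraction of pixels each class occupies. This is an illustrative computation, not the authors' code; the class IDs and names are hypothetical.

```python
import numpy as np

# Hypothetical semantic label map: an H x W grid of class IDs
# (e.g., 0 = road, 1 = building, 2 = vegetation).
label_map = np.array([
    [0, 0, 1],
    [0, 2, 1],
    [2, 2, 1],
])

num_classes = 3
counts = np.bincount(label_map.ravel(), minlength=num_classes)
proportions = counts / label_map.size  # fraction of pixels per class

print(proportions)  # class shares, summing to 1
```

A generator conditioned on such a vector can be steered toward scenes with, say, more vegetation and less road by editing the target proportions.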
Neural Scene Decoration from a Single Photograph
In this paper, we introduce a new problem of domain-specific indoor scene image synthesis, namely neural scene decoration.
Indoor Scene Generation from a Collection of Semantic-Segmented Depth Images
Different from existing methods that represent an indoor scene by the type, location, and other properties of the objects in the room and learn the scene layout from a collection of complete 3D indoor scenes, our method models each indoor scene as a 3D semantic scene volume and learns a volumetric generative adversarial network (GAN) from a collection of 2.5D partial observations of 3D scenes.
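A minimal sketch of the volumetric representation described above, using a toy grid and a single synthetic 2.5D observation. The grid size, class IDs, and back-projection are illustrative assumptions, not the paper's pipeline.

```python
import numpy as np

# Hypothetical 3D semantic scene volume: depth x height x width grid,
# each voxel holding a semantic class ID (0 = empty).
D, H, W = 8, 8, 8
volume = np.zeros((D, H, W), dtype=np.int8)

# A 2.5D partial observation = a depth image plus per-pixel labels.
depth = np.full((H, W), 5)               # every ray hits a surface at depth 5
labels = np.ones((H, W), dtype=np.int8)  # class 1 (e.g., "wall")

# Back-project the observation into the volume: mark the voxel where
# each camera ray terminates with that pixel's semantic class.
for y in range(H):
    for x in range(W):
        volume[depth[y, x], y, x] = labels[y, x]

print(volume[5].sum())  # the observed surface slice is now filled
```

A volumetric GAN trained on many such partial volumes would learn to generate complete scene volumes.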
MIGS: Meta Image Generation from Scene Graphs
We propose MIGS (Meta Image Generation from Scene Graphs), a meta-learning based approach for few-shot image generation from graphs that enables adapting the model to different scenes and increases the image quality by training on diverse sets of tasks.
LUMINOUS: Indoor Scene Generation for Embodied AI Challenges
However, current simulators for Embodied AI (EAI) challenges only provide simulated indoor scenes with a limited number of layouts.
Compositional Transformers for Scene Generation
We introduce the GANformer2 model, an iterative object-oriented transformer, explored for the task of generative modeling.
Vehicle trajectory prediction works, but not everywhere
We further show that the generated scenes (i) are realistic since they do exist in the real world, and (ii) can be used to make existing models more robust, yielding 30-40% reductions in the off-road rate.
Risk-Aware Scene Sampling for Dynamic Assurance of Autonomous Systems
Our RNS and GBO samplers selected a higher percentage of high-risk scenes (83% and 92%, respectively) than the grid, random, and Halton samplers (56%, 66%, and 71%).
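For context on the Halton baseline mentioned above, here is a pure-Python sketch of low-discrepancy Halton sampling over two hypothetical scene parameters; the parameter names are illustrative, not the paper's setup.

```python
def halton(index, base):
    """Return the Halton sequence value for a given index and base."""
    result, f = 0.0, 1.0
    while index > 0:
        f /= base
        result += f * (index % base)
        index //= base
    return result

# Sample 2D scene parameters (e.g., ego speed and pedestrian distance,
# both normalized to [0, 1]) with a Halton sequence. Bases 2 and 3 are
# the standard choice for two dimensions.
samples = [(halton(i, 2), halton(i, 3)) for i in range(1, 9)]
print(samples[0])  # (0.5, 0.3333...)
```

Unlike grid or uniform random sampling, Halton points cover the parameter space evenly at any sample count, which is why it is a common scene-sampling baseline.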
Modeling Image Composition for Complex Scene Generation
Compared to existing CNN-based and Transformer-based generation models, which entangle modeling at the pixel and patch levels or at the object and patch levels respectively, the proposed focal attention predicts the current patch token by attending only to the highly related tokens specified by the spatial layout, thereby achieving disambiguation during training.
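The layout-restricted attention idea can be sketched as a masked attention step, where a layout-derived boolean mask keeps only the "highly related" tokens. The mask construction below is a toy stand-in, not the paper's implementation.

```python
import numpy as np

def focal_attention(q, k, v, layout_mask):
    """Attention where each query attends only to layout-related tokens.

    layout_mask[i, j] is True if token j is spatially related to token i
    (a simplified stand-in for the layout-derived relevance in the paper).
    """
    scores = q @ k.T / np.sqrt(k.shape[-1])
    scores = np.where(layout_mask, scores, -1e9)  # mask unrelated tokens
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

rng = np.random.default_rng(0)
q = k = v = rng.standard_normal((4, 8))  # 4 patch tokens, dim 8
# Toy layout relation: each token is related to itself and its right neighbor.
mask = np.eye(4, dtype=bool) | np.eye(4, k=1, dtype=bool)
out = focal_attention(q, k, v, mask)
print(out.shape)  # (4, 8)
```

Restricting the attention pattern this way is what lets each patch prediction be conditioned on its layout neighborhood rather than the full token set.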