Unveiling Spaces: Architecturally meaningful semantic descriptions from images of interior spaces

19 Dec 2023  ·  Demircan Tas, Rohit Priyadarshi Sanatani

There has been a growing adoption of computer vision tools and technologies in architectural design workflows over the past decade. Notable use cases include point cloud generation, visual content analysis, and spatial awareness for robotic fabrication. Multiple image classification, object detection, and semantic pixel segmentation models have become popular for the extraction of high-level symbolic descriptions and semantic content from two-dimensional images and videos. However, a major challenge in this regard has been the extraction of high-level architectural structures (walls, floors, ceilings, windows, etc.) from diverse imagery in which parts of these elements are occluded by furniture, people, or other non-architectural elements. This project aims to tackle this problem by proposing models that are capable of extracting architecturally meaningful semantic descriptions from two-dimensional scenes of populated interior spaces. 1000 virtual classrooms are parametrically generated, randomized along key spatial parameters such as length, width, height, and door/window positions. Camera positions and non-architectural visual obstructions (furniture/objects) are also randomized. A Generative Adversarial Network (GAN) for image-to-image translation (Pix2Pix) is trained on synthetically generated rendered images of these enclosures, along with corresponding image abstractions representing high-level architectural structure. The model is then tested on unseen synthetic imagery of new enclosures, and outputs are evaluated against ground truth via pixel-wise comparison. A similar evaluation is also carried out on photographs of existing indoor enclosures, to measure the model's performance in real-world settings.
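The parametric generation step described above can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the sampling ranges, field names, and camera-placement logic are all assumptions; the paper specifies only that length, width, height, door/window positions, cameras, and obstructions are randomized.

```python
import random

def sample_classroom(rng: random.Random) -> dict:
    """Sample one randomized virtual classroom configuration.

    All numeric ranges below are illustrative assumptions
    (metres), not values reported in the paper.
    """
    length = rng.uniform(6.0, 12.0)
    width = rng.uniform(5.0, 10.0)
    height = rng.uniform(2.7, 4.0)
    return {
        "length": length,
        "width": width,
        "height": height,
        # door/window positions as fractions along a wall (assumed encoding)
        "door_pos": rng.uniform(0.1, 0.9),
        "window_pos": rng.uniform(0.1, 0.9),
        # randomized camera placement inside the enclosure (x, y, z)
        "camera": (
            rng.uniform(0.5, length - 0.5),
            rng.uniform(0.5, width - 0.5),
            rng.uniform(1.2, 1.8),
        ),
    }

# Fixed seed so the 1000-room dataset is reproducible
rng = random.Random(42)
dataset = [sample_classroom(rng) for _ in range(1000)]
```

Each sampled dictionary would then drive a procedural modelling and rendering pipeline to produce the paired training images.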
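The pixel-wise comparison against ground truth might look like the following sketch over class-label maps. The specific metrics shown (overall pixel accuracy and per-class intersection-over-union) are common choices for semantic segmentation but are an assumption here; the paper states only that outputs are compared to ground truth pixel-wise.

```python
def pixel_accuracy(pred, truth):
    """Fraction of pixels whose predicted class matches the ground truth."""
    total = correct = 0
    for p_row, t_row in zip(pred, truth):
        for p, t in zip(p_row, t_row):
            total += 1
            correct += (p == t)
    return correct / total

def class_iou(pred, truth, cls):
    """Intersection-over-union for one class label (e.g. 'wall')."""
    inter = union = 0
    for p_row, t_row in zip(pred, truth):
        for p, t in zip(p_row, t_row):
            if p == cls and t == cls:
                inter += 1
            if p == cls or t == cls:
                union += 1
    # Convention: a class absent from both maps scores 1.0
    return inter / union if union else 1.0

# Toy 2x3 label maps (class names are illustrative)
pred = [["wall", "wall", "floor"], ["wall", "floor", "floor"]]
truth = [["wall", "wall", "wall"], ["wall", "floor", "floor"]]
print(pixel_accuracy(pred, truth))        # 5 of 6 pixels agree
print(class_iou(pred, truth, "wall"))     # 3 shared / 4 in either map
```

In practice these maps would come from the Pix2Pix output after quantizing each pixel to its nearest class color.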
