Combining visibility analysis and deep learning for refinement of semantic 3D building models by conflict classification

10 Mar 2023 · Olaf Wysocki, Eleonora Grilli, Ludwig Hoegner, Uwe Stilla

Semantic 3D building models are widely available and used in numerous applications. Such 3D building models display rich semantics but no façade openings, chiefly owing to their aerial acquisition techniques. Hence, refining models' façades using dense, street-level, terrestrial point clouds seems a promising strategy. In this paper, we propose a method of combining visibility analysis and neural networks for enriching 3D models with window and door features. In the method, occupancy voxels are fused with classified point clouds, which provides semantics to voxels. Voxels are also used to identify conflicts between laser observations and 3D models. The semantic voxels and conflicts are combined in a Bayesian network to classify and delineate façade openings, which are reconstructed using a 3D model library. Unaffected building semantics are preserved while the refined semantics are added, thereby upgrading the building model to LoD3. Moreover, Bayesian network results are back-projected onto point clouds to improve points' classification accuracy. We tested our method on a municipal CityGML LoD2 repository and the open point cloud datasets TUM-MLS-2016 and TUM-FAÇADE. Validation results revealed that the method improves the accuracy of point cloud semantic segmentation and upgrades buildings with façade elements. The method can be applied to enhance the accuracy of urban simulations and facilitate the development of semantic segmentation algorithms.
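The visibility-analysis step can be illustrated with a short sketch. The Python snippet below is a minimal, hypothetical example, not the paper's implementation: names such as cast_ray, VOXEL_SIZE, and model_occupancy are assumptions, and the coarse ray marching stands in for an exact voxel traversal. It walks each laser ray through a voxel grid, marks traversed voxels as empty, marks the endpoint voxel as occupied, and flags a conflict wherever the LoD2 model claims a voxel is occupied but the ray passes through it. These per-voxel states are the cue that, together with point-cloud semantics, would feed the Bayesian network classifying façade openings.

```python
# Minimal sketch of the visibility-analysis step, assuming a uniform voxel
# grid and coarse ray marching; VOXEL_SIZE, cast_ray, and model_occupancy
# are illustrative names, not the paper's actual implementation.
import numpy as np

VOXEL_SIZE = 0.5                      # voxel edge length in metres (assumed)
EMPTY, OCCUPIED, CONFLICT = 0, 1, 2   # per-voxel observation states

def world_to_voxel(point, origin, voxel_size=VOXEL_SIZE):
    """Map a 3D world coordinate to an integer voxel index (i, j, k)."""
    return tuple(int(i) for i in np.floor((point - origin) / voxel_size))

def cast_ray(sensor, endpoint, origin, model_occupancy, states,
             voxel_size=VOXEL_SIZE):
    """Trace one laser ray from the sensor position to its return.

    Voxels traversed before the endpoint were observed as free space; if the
    LoD2 building model marks such a voxel as occupied, the observation
    contradicts the model and the voxel is flagged as a conflict (a candidate
    facade opening). The endpoint voxel itself is marked as occupied.
    """
    direction = endpoint - sensor
    length = np.linalg.norm(direction)
    if length == 0.0:
        return states
    direction = direction / length
    # Coarse ray marching with half-voxel steps; an exact traversal
    # (e.g. Amanatides-Woo) would be used in a production pipeline.
    for t in np.arange(0.0, length - voxel_size, 0.5 * voxel_size):
        idx = world_to_voxel(sensor + t * direction, origin, voxel_size)
        if idx in model_occupancy:      # model says "wall", laser saw through it
            states[idx] = CONFLICT
        elif states.get(idx, EMPTY) != CONFLICT:
            states[idx] = EMPTY
    states[world_to_voxel(endpoint, origin, voxel_size)] = OCCUPIED
    return states

# Usage idea: aggregate states over all rays, then combine each voxel's state
# with the semantic class of the points it contains (e.g. in a Bayesian
# network) to decide whether it belongs to a window, a door, or the wall.
```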
