Three-dimensional (3D) meshes are now widely used across diverse application areas (e.g., industry, education, entertainment, and safety). 3D models are captured with multiple RGB-D sensors, and the sampled geometric manifolds are processed, compressed, simplified, stored, and transmitted for reconstruction in a virtual space. These low-level processing tasks require an accurate representation of the 3D model, which can be achieved through saliency estimation mechanisms that identify surface patches of particular importance. Saliency maps therefore guide the selection of feature locations and the prioritization of 3D manifold segments: salient vertices receive more bits during compression or a lower decimation probability during simplification, since compression and simplification are counterparts of the same process. In this work, we present a novel deep saliency mapping approach for 3D meshes that substantially reduces the execution time of saliency map estimation compared with other relevant approaches. Our method uses baseline 3D importance maps to train convolutional neural networks. We also present applications that exploit the extracted saliency, namely feature-aware multiscale compression and simplification frameworks.
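The saliency-guided simplification idea above can be sketched in a few lines: map per-vertex saliency to a decimation probability so that salient vertices are rarely removed. This is a minimal illustrative sketch, not the paper's implementation; the function names, the exponential weighting, and the `temperature` parameter are all assumptions chosen for clarity.

```python
import numpy as np

def decimation_probabilities(saliency, temperature=1.0):
    """Map per-vertex saliency in [0, 1] to decimation probabilities.

    Higher saliency -> lower chance of removal during simplification;
    the same weights could allocate more bits per vertex during
    compression. (Hypothetical weighting, not the paper's scheme.)
    """
    saliency = np.asarray(saliency, dtype=float)
    # Invert saliency so salient vertices are protected, then
    # normalize the weights into a probability distribution.
    weights = np.exp(-saliency / temperature)
    return weights / weights.sum()

def select_vertices_to_remove(saliency, n_remove, seed=None):
    """Sample vertices for removal, biased toward low-saliency regions."""
    rng = np.random.default_rng(seed)
    probs = decimation_probabilities(saliency)
    return rng.choice(len(probs), size=n_remove, replace=False, p=probs)
```

For example, with saliency values `[0.9, 0.1, 0.5]`, the low-saliency second vertex receives the highest removal probability, so a coarse mesh preserves the feature-rich regions.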
