Object Modelling with a Handheld RGB-D Camera

21 May 2015  ·  Aitor Aldoma, Johann Prankl, Alexander Svejda, Markus Vincze

This work presents a flexible system to reconstruct 3D models of objects captured with an RGB-D sensor. A major advantage of the method is that the reconstruction pipeline allows the user to acquire a full 3D model of the object. This is achieved by acquiring several partial 3D models in different sessions that are automatically merged together to reconstruct a full model. In addition, the 3D models acquired by our system can be directly used by state-of-the-art object instance recognition and object tracking modules, providing object-perception capabilities for different applications, such as human-object interaction analysis or robot grasping. The system imposes no constraints on the appearance of objects (textured or untextured) nor on the modelling setup (moving camera with a static object, or a turn-table setup). The proposed reconstruction system has been used to model a large number of objects, resulting in metrically accurate and visually appealing 3D models.
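The core idea of merging partial models acquired in separate sessions can be illustrated with a minimal registration sketch. The snippet below is not the authors' pipeline: it assumes the Open3D library, uses plain point-to-plane ICP in place of the paper's own alignment and merging strategy, and the file names (`session_a.ply`, `session_b.ply`, `full_model.ply`) are placeholders.

```python
# Minimal sketch (not the paper's method): align two partial object models
# from different capture sessions with ICP and merge them into one cloud.
import numpy as np
import open3d as o3d


def load_partial_model(path, voxel_size=0.003):
    """Load a partial reconstruction, downsample it and estimate normals."""
    cloud = o3d.io.read_point_cloud(path)
    cloud = cloud.voxel_down_sample(voxel_size)
    cloud.estimate_normals(
        o3d.geometry.KDTreeSearchParamHybrid(radius=voxel_size * 4, max_nn=30))
    return cloud


def align_and_merge(source, target, init=np.eye(4), max_dist=0.01):
    """Refine the pose of `source` against `target` with point-to-plane ICP
    and return both clouds merged in the target's coordinate frame."""
    result = o3d.pipelines.registration.registration_icp(
        source, target, max_dist, init,
        o3d.pipelines.registration.TransformationEstimationPointToPlane())
    source.transform(result.transformation)
    return source + target  # concatenate the aligned point clouds


if __name__ == "__main__":
    # Placeholder file names for two partial models of the same object.
    session_a = load_partial_model("session_a.ply")
    session_b = load_partial_model("session_b.ply")
    full_model = align_and_merge(session_b, session_a)
    o3d.io.write_point_cloud("full_model.ply", full_model)
```

In practice the initial transformation `init` would come from a coarse alignment step (e.g. feature matching), since ICP alone only converges from a reasonable starting pose; the paper's system handles this merging of sessions automatically.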
