Relational Reasoning Network (RRN) for Anatomical Landmarking

Purpose: We perform anatomical landmarking for craniomaxillofacial (CMF) bones without explicitly segmenting them. To this end, we propose a simple yet efficient deep network architecture, called the relational reasoning network (RRN), that accurately learns the local and global relations among landmarks in CMF bones, specifically the mandible, maxilla, and nasal bones. Approach: The proposed RRN works in an end-to-end manner, utilizing learned relations among landmarks based on dense-block units. Given a few landmarks as input, RRN treats landmarking as a data imputation problem in which the landmarks to be predicted are considered missing. Results: We applied RRN to cone-beam computed tomography scans from 250 patients. With 4-fold cross-validation, we obtained an average root mean squared error of less than 2 mm per landmark. The proposed RRN revealed unique relationships among the landmarks that allow reasoning about how informative individual landmark points are. The system identifies missing landmark locations accurately even when severe pathology or deformation is present in the bones. Conclusions: Accurately identifying anatomical landmarks is a crucial step in deformation analysis and surgical planning for CMF surgeries. Achieving this goal without explicit bone segmentation addresses a major limitation of segmentation-based approaches, where segmentation failure (as is often the case in bones with severe pathology or deformation) can easily lead to incorrect landmarking. To the best of our knowledge, this is the first algorithm of its kind to learn anatomical relations among objects using deep learning.

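The idea of landmarking as relation-based imputation can be illustrated with a minimal sketch. The code below is a toy example under stated assumptions, not the authors' implementation: the class names (PairwiseRelationBlock, LandmarkImputationRRN), layer sizes, and the pairwise-relation formulation are hypothetical, and the actual RRN is built from dense-block units rather than the simple pairwise MLP shown here. It only conveys the general mechanism of predicting missing landmark coordinates from relations learned among the given ones.

```python
# Hypothetical sketch: relation-based imputation of missing 3-D landmarks.
# Architecture details are assumptions and differ from the paper's dense-block RRN.
import torch
import torch.nn as nn


class PairwiseRelationBlock(nn.Module):
    """Computes a relation vector for every ordered pair of input landmarks
    and aggregates them into a single global relation feature."""

    def __init__(self, in_dim=3, hidden=64):
        super().__init__()
        self.g = nn.Sequential(
            nn.Linear(2 * in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )

    def forward(self, x):                              # x: (batch, n_known, 3)
        n = x.size(1)
        xi = x.unsqueeze(2).expand(-1, -1, n, -1)      # (batch, n, n, 3)
        xj = x.unsqueeze(1).expand(-1, n, -1, -1)      # (batch, n, n, 3)
        pairs = torch.cat([xi, xj], dim=-1)            # (batch, n, n, 6)
        return self.g(pairs).sum(dim=(1, 2))           # (batch, hidden)


class LandmarkImputationRRN(nn.Module):
    """Regresses the 3-D coordinates of missing landmarks from the given ones."""

    def __init__(self, n_missing, hidden=64):
        super().__init__()
        self.relations = PairwiseRelationBlock(hidden=hidden)
        self.f = nn.Sequential(
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3 * n_missing),
        )

    def forward(self, known_landmarks):                # (batch, n_known, 3)
        rel = self.relations(known_landmarks)
        return self.f(rel).view(rel.size(0), -1, 3)    # (batch, n_missing, 3)


# Usage with placeholder data: 8 scans, 4 given landmarks, 5 landmarks to predict.
model = LandmarkImputationRRN(n_missing=5)
given = torch.randn(8, 4, 3)
predicted = model(given)                               # shape (8, 5, 3)
```

In this sketch the missing landmarks are recovered purely from pairwise geometric relations among the given ones, which mirrors the paper's framing of landmarking as data imputation; a faithful reproduction would follow the dense-block formulation described in the paper.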