Feedback Assisted Adversarial Learning to Improve the Quality of Cone-beam CT Images

23 Oct 2022 · Takumi Hase, Megumi Nakao, Mitsuhiro Nakamura, Tetsuya Matsuda

Unsupervised image translation based on adversarial learning has attracted attention as a way to improve the quality of medical images. However, adversarial training guided only by a discriminator's global evaluation value does not provide sufficient translation performance for locally varying image features. We propose an adversarial learning framework with a feedback mechanism from the discriminator to improve the quality of cone-beam CT (CBCT) images. The framework employs a U-Net as the discriminator, which outputs a probability map representing local discrimination results. This probability map is fed back to the generator and used during training to improve the image translation. Experiments using 76 corresponding CT-CBCT image pairs confirmed that the proposed framework captures more diverse image features than conventional adversarial learning frameworks and produces synthetic images whose pixel values are close to those of the reference CT, with a correlation coefficient of 0.93.
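
The core mechanism described in the abstract (a U-Net discriminator whose per-pixel probability map is fed back into generator training) can be sketched roughly as below. This is a minimal illustrative sketch in PyTorch, not the authors' implementation: the layer counts, the `lambda_fb` weight, the `generator_step` helper, and the use of the map to weight an L1 term against a reference image are all assumptions. Since the paper describes unsupervised translation, the same per-pixel weighting could equally be applied to, e.g., a cycle-consistency term rather than a paired reconstruction loss.

```python
# Illustrative sketch only: a U-Net-style discriminator that outputs a local
# probability map, and a generator update that uses that map as feedback.
# Architecture sizes and loss weights are assumptions, not the paper's values.
import torch
import torch.nn as nn
import torch.nn.functional as F


class UNetDiscriminator(nn.Module):
    """Encoder-decoder discriminator producing a per-pixel realness probability map."""

    def __init__(self, in_ch=1, base=32):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv2d(in_ch, base, 4, 2, 1), nn.LeakyReLU(0.2))
        self.enc2 = nn.Sequential(nn.Conv2d(base, base * 2, 4, 2, 1), nn.LeakyReLU(0.2))
        self.dec1 = nn.Sequential(nn.ConvTranspose2d(base * 2, base, 4, 2, 1), nn.ReLU())
        self.dec2 = nn.ConvTranspose2d(base * 2, 1, 4, 2, 1)  # skip connection doubles channels

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(e1)
        d1 = self.dec1(e2)
        logits = self.dec2(torch.cat([d1, e1], dim=1))  # same H x W as the input
        return torch.sigmoid(logits)                    # local probability map in [0, 1]


def generator_step(G, D, cbct, ref, opt_G, lambda_fb=10.0):
    """One generator update that uses D's probability map as spatial feedback.

    Pixels the discriminator judges as fake (low probability) receive a larger
    weight in the translation term, so training focuses on poorly translated
    regions. `ref` and `lambda_fb` are placeholders for this sketch.
    """
    opt_G.zero_grad()
    fake = G(cbct)
    prob_map = D(fake)                                   # per-pixel realness of the synthetic image
    adv_loss = F.binary_cross_entropy(prob_map, torch.ones_like(prob_map))
    weight = (1.0 - prob_map).detach()                   # feedback: emphasize locally implausible pixels
    fb_loss = (weight * (fake - ref).abs()).mean()       # weighted L1; could instead weight a cycle loss
    loss = adv_loss + lambda_fb * fb_loss
    loss.backward()
    opt_G.step()
    return loss.item()
```

In such a setup the discriminator itself would typically be trained with a per-pixel binary cross-entropy against all-ones maps for real CT slices and all-zeros maps for generated ones; that step is omitted here for brevity.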
