Copenhagen, Denmark
Onsite/Online

ESTRO 2022

Session Item

Monday
May 09
10:30 - 11:30
Room D5
Deep learning for image analysis
Andre Dekker, The Netherlands;
Catarina Veiga, United Kingdom
Proffered Papers
Physics
11:10 - 11:20
CBCT-to-CT synthesis using weakly-paired cycle-consistent generative adversarial networks
Adam Szmul, United Kingdom
OC-0773

Abstract

CBCT-to-CT synthesis using weakly-paired cycle-consistent generative adversarial networks
Authors:

Adam Szmul1, Sabrina Taylor1, Pei Lim2, Jessica Cantwell2, Derek D’Souza2, Syed Moinuddin2, Mark Gaze2, Jennifer Gains2, Catarina Veiga1

1University College London, Department of Medical Physics and Biomedical Engineering, London, United Kingdom; 2University College London Hospitals NHS Foundation Trust, Radiotherapy Department, London, United Kingdom

Purpose or Objective

Cycle-consistent generative adversarial networks (cycleGANs) are a popular method for CBCT-to-CT synthesis for RT treatment verification and adaptation. However, current approaches cannot guarantee sufficient structural consistency between source and synthetic images. We propose a novel framework for CBCT-to-CT synthesis in abdominal paediatric patients, a rare and diverse population where collecting large and representative datasets for data-driven approaches is challenging.

Material and Methods

A total of 50 CT and 183 CBCT scans from 50 patients aged 2 to 22 years, treated for malignancies in the thoracic-abdominal-pelvic region, were used in this study. The data were randomly split at the patient level into 40/10 patients for training and evaluation, respectively (40/10 CTs, 152/31 CBCTs).
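A minimal sketch of a patient-level split of this kind, assuming Python; the function name, the fixed hold-out size and the seed handling are illustrative assumptions, not the authors' code. Splitting by patient rather than by scan keeps CBCTs from the same patient out of both sets.

```python
import random

def split_by_patient(patient_ids, n_eval=10, seed=0):
    """Hold out whole patients for evaluation so that CT/CBCT scans
    from the same patient never appear in both training and evaluation."""
    ids = sorted(set(patient_ids))
    rng = random.Random(seed)
    rng.shuffle(ids)
    eval_ids = set(ids[:n_eval])
    train_ids = set(ids[n_eval:])
    return train_ids, eval_ids
```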


We propose a novel cycleGAN-based CBCT-to-CT synthesis pipeline in which only global residuals are learned and predicted. This allows the raw CBCTs to be refined by removing unwanted artefacts, rather than generating new images merely inspired by the input. We demonstrated this using UNet and ResNet architectures and then investigated different configuration options: with/without a geometrical loss, and smart data selection based on the common (abdominal) field-of-view across the dataset, acting as a weakly-paired data approach; a sketch of the residual-learning idea follows.
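A minimal PyTorch sketch of global residual learning wrapped around a generator backbone; the class name, the `backbone` argument and the [-1, 1] intensity clamp are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn

class ResidualSynthesisGenerator(nn.Module):
    """Wraps an image-to-image backbone (e.g. a UNet- or ResNet-style
    generator) so that it predicts a global residual that is added back
    to the input CBCT, refining it rather than synthesising a new image."""

    def __init__(self, backbone: nn.Module):
        super().__init__()
        self.backbone = backbone  # any CBCT -> correction-map network

    def forward(self, cbct: torch.Tensor) -> torch.Tensor:
        residual = self.backbone(cbct)          # predicted artefact/intensity correction
        syn_ct = cbct + residual                # refine the input instead of replacing it
        return torch.clamp(syn_ct, -1.0, 1.0)   # keep values in the normalised intensity range
```

In a cycleGAN setting, both generators (CBCT-to-CT and CT-to-CBCT) can be wrapped this way while the adversarial, cycle-consistency and geometrical losses are applied to the refined outputs as usual.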


A total of 10 configurations were trained (150 epochs) and evaluated. We compared our proposed framework to vanilla cycleGANs (without smart data selection, geometrical loss or residual learning, with UNet and ResNet generators). Due to the lack of simultaneously acquired CT/CBCT scans, the synCTs were evaluated against two complementary ground-truths (GTs): the raw CBCT and a virtual CT (vCT). The vCT consisted of the CT deformably registered to the CBCTs using NiftyReg. In the vCTs, the original gas regions were replaced with water intensity and gas regions from the CBCTs were then added. Three global image similarity metrics were calculated between synCTs and GTs: sum of squared differences (SSD), normalised cross-correlation (NCC) and root mean square error (RMSE). Metrics were calculated within the common field-of-view, the same as used for smart data selection.
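A minimal sketch of how such masked global metrics could be computed with NumPy; the function name and the Pearson-correlation form of NCC are assumptions, not necessarily the exact formulation used in the study.

```python
import numpy as np

def image_similarity(syn_ct: np.ndarray, gt: np.ndarray, fov_mask: np.ndarray) -> dict:
    """Compute SSD, RMSE and NCC between a synthetic CT and a ground-truth
    image, restricted to a common field-of-view mask."""
    a = syn_ct[fov_mask > 0].astype(np.float64)
    b = gt[fov_mask > 0].astype(np.float64)

    diff = a - b
    ssd = float(np.sum(diff ** 2))             # sum of squared differences
    rmse = float(np.sqrt(np.mean(diff ** 2)))  # root mean square error
    ncc = float(np.corrcoef(a, b)[0, 1])       # normalised cross-correlation
    return {"SSD": ssd, "RMSE": rmse, "NCC": ncc}
```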

Results

A summary of all calculated metrics is shown in Tab. 1, demonstrating improved agreement with both the vCT and the CBCT after adding the proposed extensions, for both tested generator architectures. The worst scores were observed for the vanilla cycleGANs and the best for the configurations incorporating all the proposed extensions. Our proposed cycleGAN workflow improved the anatomical consistency between source and synthetic images compared to the vanilla counterpart (Fig. 1).




Conclusion

The proposed cycleGAN-based workflow resulted in improved synthetic results. Using global residuals with a geometrical loss and weakly-paired data via smart data selection was agnostic to the generator architecture and improved the performance of the CBCT-to-CT synthesis task.