A neural network to create super-resolution MR from multiple 2D brain scans of paediatric patients
MO-0222
Abstract
Authors: Jose Benitez-Aurioles1, Angela Davey2, Marianne Aznar2, Abigail Bryce-Atkinson2, Eliana M. Vásquez Osorio2, Shermaine Pan3, Peter Sitch3, Marcel Van Herk2
1University of Manchester, Division of Informatics, Imaging and Data Sciences, Manchester, United Kingdom; 2University of Manchester, Division of Cancer Sciences, Manchester, United Kingdom; 3The Christie NHS Foundation Trust, Department of Proton Therapy, Manchester, United Kingdom
Purpose or Objective
High-resolution (HR) MRI provides detailed soft-tissue information that is useful in assessing long-term side-effects after radiotherapy in childhood cancer survivors, such as facial asymmetry or morphological changes in brain structures. However, 3D HR MRI requires long acquisition times, so in practice multiple 2D low-resolution (LR) images (with thick slices, acquired in multiple planes) are often used for patient follow-up.
In this work, we present a super-resolution (SR) convolutional neural network (CNN) that reconstructs a 3D HR image from 2D LR images, in order to improve the extraction of structural biomarkers from routine scans.
Material and Methods
A multi-level densely connected super-resolution CNN [1] was adapted to take two perpendicular LR scans (e.g., coronal and axial) as input tensors and reconstruct a 3D HR image. Scans were resampled to a resolution of 1 mm³ before being fed into the network. A training set of 80 HR T1 paediatric head scans (healthy subjects, 9–10 years) from the Adolescent Brain Cognitive Development (ABCD) study was used as baseline, and 2D LR images were simulated from it for use as input to the CNN (Figure 1). Ten additional ABCD scans were used to tune the hyperparameters of the CNN. The output of the model (images_CNN) was compared against simple interpolation (resampling and averaging both inputs; images_interp).
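For illustration, a minimal sketch of how thick-slice 2D inputs could be simulated from an isotropic HR volume and combined into the interpolation baseline. The 5 mm slice thickness, the block-averaging slice profile, and the function names are assumptions for this sketch, not the exact pipeline of Figure 1.

```python
import numpy as np
from scipy import ndimage

def simulate_lr(hr, axis, slice_thickness=5):
    """Simulate a thick-slice 2D acquisition from a 1 mm isotropic HR volume.

    Assumed slice profile: block averaging over `slice_thickness` voxels
    (an illustrative 5 mm here), followed by slice decimation and linear
    resampling back onto the 1 mm HR grid.
    """
    smoothed = ndimage.uniform_filter1d(hr.astype(np.float32),
                                        size=slice_thickness, axis=axis)
    idx = np.arange(0, hr.shape[axis], slice_thickness)
    thick = np.take(smoothed, idx, axis=axis)
    zoom = [1.0] * hr.ndim
    zoom[axis] = hr.shape[axis] / thick.shape[axis]
    return ndimage.zoom(thick, zoom, order=1)

def interp_baseline(lr_a, lr_b):
    """images_interp: average two perpendicular LR volumes that have
    already been resampled onto the common 1 mm grid."""
    return 0.5 * (lr_a + lr_b)

# The CNN input would then stack the two perpendicular volumes
# channel-wise, e.g. np.stack([lr_axial, lr_coronal]) -> (2, D, H, W).
```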
The evaluation was done in two steps. First, the quality of the reconstructed HR images was assessed against the baseline using the peak signal-to-noise ratio (PSNR), where larger values indicate better quality. Second, the precision of structure segmentation (using the autocontouring software Limbus AI) in the reconstructed vs the baseline HR images was assessed using the mean distance-to-agreement (mDTA). As Limbus AI is not validated for paediatric data, all segmentations were carefully inspected.
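A minimal sketch of the two metrics as commonly defined; the abstract does not give its exact formulations, so the intensity-range peak for PSNR and the one-directional mDTA below are assumptions.

```python
import numpy as np
from scipy import ndimage

def psnr(reference, test):
    """Peak signal-to-noise ratio (larger is better), using the
    reference intensity range as the peak value."""
    reference = reference.astype(np.float64)
    test = test.astype(np.float64)
    mse = np.mean((reference - test) ** 2)
    peak = reference.max() - reference.min()
    return 10.0 * np.log10(peak ** 2 / mse)

def mean_dta(mask_ref, mask_test, spacing=(1.0, 1.0, 1.0)):
    """Mean distance-to-agreement (mm) from the test contour surface to
    the reference contour surface; masks are boolean volumes. Some
    definitions average the two directions."""
    surf_ref = mask_ref & ~ndimage.binary_erosion(mask_ref)
    surf_test = mask_test & ~ndimage.binary_erosion(mask_test)
    # Distance from every voxel to the nearest reference surface voxel.
    dist = ndimage.distance_transform_edt(~surf_ref, sampling=spacing)
    return dist[surf_test].mean()
```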
Three datasets were available: 1) 10 new ABCD images (dataset 1); 2) 18 images from the Children’s Brain Tumour Network (CBTN) study (acquired HR and simulated LR images, age 2–20 years; dataset 2); and 3) 6 “real-world” follow-up images of a paediatric head and neck cancer patient (acquired HR and acquired LR, age 14–19 years; dataset 3).
Results
The proposed CNN outperformed simple interpolation. PSNR values for images_CNN were, on average (SD), 26.1 (2.1) for dataset 1 and 24.4 (2.6) for dataset 2, while those for images_interp were 20.5 (1.9) and 21.4 (2.8), respectively. Similarly, structure segmentation was more precise (closer to that of the baseline images) in images_CNN than in images_interp (Figures 2a and 2b).
Conclusion
This work demonstrates that deep learning methods can successfully reconstruct 3D HR images from 2D LR ones, potentially advancing research on the effects of paediatric radiotherapy. Our model outperforms standard interpolation both in perceptual quality and for autocontouring. Further work is needed to improve generalisability across imaging sequences and to validate the model for additional structural analysis tasks.
[1] Chen Y, et al. Brain MRI super resolution using 3D deep densely connected neural networks. arXiv:1803.01417. https://arxiv.org/abs/1803.01417