Autocontouring of the mouse thorax using deep learning
Justin Malimban,
The Netherlands
PD-0066
Abstract
Authors: Justin Malimban1, Danny Lathouwers2, Haibin Qian3, Frank Verhaegen4, Julia Wiedemann1, Sytze Brandenburg1, Marius Staring5
1University Medical Center Groningen, Department of Radiation Oncology, Groningen, The Netherlands; 2Delft University of Technology, Department of Radiation Science and Technology, Delft, The Netherlands; 3Amsterdam University Medical Centers (location AMC) and Cancer Center Amsterdam, Department of Medical Biology, Amsterdam, The Netherlands; 4Maastricht University Medical Center, Department of Radiation Oncology (MAASTRO), Maastricht, The Netherlands; 5Leiden University Medical Center, Department of Radiology, Leiden, The Netherlands
Purpose or Objective
Image-guided small animal irradiations are typically performed in a single session, requiring continuous administration of anesthesia. Since prolonged exposure to anesthesia may affect experimental outcomes, a fast preclinical irradiation workflow is desirable. As in the clinic, organ delineation remains one of the most time-consuming and labor-intensive steps of the preclinical irradiation planning workflow, and this is amplified by the large number of animals needed for a single study. In this work, we evaluate to what extent deep learning pipelines can speed up this workflow for thorax irradiations while retaining contouring quality.
Material and Methods
We trained the 2D and 3D U-Net configurations of the no-new-Net (nnU-Net) pipeline, as well as AIMOS (the current best-performing algorithm for mouse segmentation), on 105 native micro-CT scans of mice, and tested the trained models on an independent dataset of 35 native CTs not included in training. We additionally evaluated segmentation performance on an external dataset of 35 contrast-enhanced CTs, which differs from the training data in properties such as mouse strain and image acquisition parameters. The quality of the automated contours was evaluated in terms of the mean surface distance (MSD) and the 95% Hausdorff distance (95% HD). We also report the average preprocessing and inference times and the total runtime of each model.
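The two surface metrics can be computed directly from the binary masks and the voxel spacing. The sketch below is a minimal illustration (not the evaluation code used in this study), assuming 3D boolean arrays for the predicted and reference contours and an isotropic-in-plane spacing of 0.14 mm as stated above; function names are hypothetical.

```python
import numpy as np
from scipy import ndimage

def surface_distances(seg, ref, spacing):
    """Distances (mm) from the surface voxels of `seg` to the surface of `ref`."""
    # A surface voxel is a foreground voxel with at least one background neighbour.
    seg_surf = seg & ~ndimage.binary_erosion(seg)
    ref_surf = ref & ~ndimage.binary_erosion(ref)
    # Euclidean distance map to the reference surface, honouring voxel spacing.
    dist_to_ref = ndimage.distance_transform_edt(~ref_surf, sampling=spacing)
    return dist_to_ref[seg_surf]

def msd_and_hd95(pred, gt, spacing=(0.14, 0.14, 0.14)):
    """Symmetric mean surface distance and 95th-percentile Hausdorff distance."""
    d_pred_to_gt = surface_distances(pred, gt, spacing)
    d_gt_to_pred = surface_distances(gt, pred, spacing)
    all_d = np.concatenate([d_pred_to_gt, d_gt_to_pred])
    return all_d.mean(), np.percentile(all_d, 95)
```

Taking the 95th percentile rather than the maximum makes the Hausdorff distance robust to a few outlier surface voxels, which is why it is commonly preferred for evaluating organ contours.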
Results
For the native CT dataset, all nnU-Net models (3d_fullres, 3d_lowres, 3d_cascade, 2d) and AIMOS generated accurate contours of the heart, spinal cord, and left and right lungs, as shown in figure 1(a). They achieved an average MSD below the in-plane voxel size of 0.14 mm, while the average 95% HD was below 0.60 mm for all target organs except the right lung segmentation of nnU-Net 2d. For the contrast-enhanced CTs, we compared only the best-performing 3D model of nnU-Net (3d_fullres) to the 2D models (nnU-Net 2d and AIMOS). Consistently across all organs, the nnU-Net 3d_fullres model showed superior segmentation performance (figure 2(a)), whereas the 2D models generated incomplete contours and exhibited unacceptably large Hausdorff distances (> 1 mm). Although the 2D models are generally faster, all models take less than 1 minute to generate contours, a significant improvement over the ~40 minutes required for manual contouring.
Conclusion
We have shown that the nnU-Net 3d_fullres model outperforms the state-of-the-art AIMOS deep learning pipeline for mouse thoracic segmentation while reducing contouring time by 98% compared to manual contouring. Our findings demonstrate the potential of integrating nnU-Net into routine irradiation planning practice to expedite irradiation, reduce the workload, and deliver high-quality irradiations.