Techniques to optimize auto-segmentation of small OARs in pediatric patients undergoing CSI
PD-0334
Abstract
Authors: James Tsui1, Marija Popovic2, Ozgur Ates3, Chia-Ho Hua3, James Schneider4, Sonia Skamene1, Carolyn Freeman1, Shirin Enger2
1McGill University Health Centre, Radiation Oncology, Montreal, Canada; 2McGill University, Medical Physics Unit, Montreal, Canada; 3St. Jude Children’s Research Hospital, Radiation Oncology, Memphis, USA; 4Jewish General Hospital, Radiation Oncology, Montreal, Canada
Purpose or Objective
Organ at risk (OAR) auto-segmentation can decrease inter-observer variability and help with the quality assurance of contouring. The development and training of deep learning (DL) algorithms are highly complex, particularly in pediatric cases requiring craniospinal irradiation (CSI), which involves multiple OARs with significant differences in size and in Hounsfield unit (HU) values. The DL model nnUNet (Isensee et al. 2021) can obviate many difficulties associated with preprocessing, choice of network architecture, and model training thanks to its self-configuring capability. It is relatively easy to implement but requires extensive computing power and lengthy training time. Its performance may also suffer when large, high-contrast structures and small, low-contrast structures, such as the lungs and the optic chiasm, are segmented in the same task and on the same CT scan. We hypothesize that performance can be improved by: 1) breaking the task into subtasks that contour structures of similar size, location, and contrast level; 2) implementing different HU windowing schemes for different subtasks; 3) implementing a loss function that better accounts for class imbalance. We focused on optic structures because their reported segmentation performance in the literature is poorer than that of other structures.
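The structure-specific HU windowing in point 2 amounts to clipping the CT intensities to a window around the target structures and rescaling, so that low-contrast soft tissue fills the dynamic range seen by the network. A minimal sketch is below; the window bounds are hypothetical illustrative values, not those used in this study.

```python
import numpy as np

def window_ct(image_hu, lower=-100.0, upper=200.0):
    """Clip a CT image to a structure-specific HU window and rescale to [0, 1].

    `lower`/`upper` are placeholder soft-tissue bounds for illustration;
    each subtask would use a window matched to its target structures.
    """
    clipped = np.clip(image_hu, lower, upper)
    return (clipped - lower) / (upper - lower)
```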
Material and Methods
We collected the planning CT scans of pediatric patients undergoing CSI and reviewed all the contours. Of 36 total patients, 29 were used for training and 7 for validation. We cropped the images to exclude structures outside the body (mask, couch, etc.) and kept only the image slices containing optic structures. We first implemented the 2D nnUNet framework to auto-segment 7 structures: the left and right eyes, lenses, and optic nerves, and the optic chiasm. We then compared the 2D nnUNet results with a basic 2D UNet incorporating two changes: 1) preprocessing the images by clipping HU values to the range of the target structures; 2) implementing a Unified Focal Loss (UFL; Yeung et al. 2021) to account for class imbalance. We trained the models and inferred the output labels on the validation dataset. We then computed the Dice similarity coefficient (DICE) between the predicted labels and the ground truths and compared performance between the two models.
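The per-structure DICE used to compare predicted labels against ground truths can be computed on binary masks as below (a minimal sketch; the `eps` smoothing term is an assumption added to guard against empty masks, not a detail from the study).

```python
import numpy as np

def dice_coefficient(pred, truth, eps=1e-7):
    """Dice similarity coefficient between two binary masks:
    2|A ∩ B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return (2.0 * intersection + eps) / (pred.sum() + truth.sum() + eps)
```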
Results
The following mean (std) DICE scores were obtained for the two models.
| Structure | nnUNet | Windowing + UNet + UFL |
| --- | --- | --- |
| Eye_L | 0.47 (±0.41) | 0.85 (±0.15) |
| Eye_R | 0.49 (±0.40) | 0.85 (±0.13) |
| Lens_L | 0.42 (±0.40) | 0.77 (±0.24) |
| Lens_R | 0.42 (±0.40) | 0.80 (±0.18) |
| Optic_Nerve_L | 0.35 (±0.37) | 0.68 (±0.19) |
| Optic_Nerve_R | 0.34 (±0.38) | 0.65 (±0.24) |
| Chiasm | 0.27 (±0.30) | 0.49 (±0.24) |
Conclusion
Adjusting the contrast window and using UFL as the loss function substantially improved segmentation performance. Future work includes extending the nnUNet framework to incorporate these two changes and auto-segment all the OARs of pediatric patients undergoing CSI.
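For context, the Unified Focal Loss of Yeung et al. combines a focal term with a focal Tversky term. The sketch below is a simplified, single-class, symmetric version for illustration only; the hyperparameter values (`lam`, `delta`, `gamma`) are placeholders and do not reflect those of the original paper or of this study.

```python
import numpy as np

def focal_loss(p, t, gamma=2.0, eps=1e-7):
    """Cross-entropy down-weighted for easy pixels by (1 - p_t)^gamma."""
    p = np.clip(p, eps, 1.0 - eps)
    pt = np.where(t == 1, p, 1.0 - p)
    return float(np.mean(-((1.0 - pt) ** gamma) * np.log(pt)))

def focal_tversky_loss(p, t, delta=0.7, gamma=0.75, eps=1e-7):
    """Tversky index (delta weights FN over FP) raised to a focal power."""
    tp = np.sum(p * t)
    fn = np.sum((1.0 - p) * t)
    fp = np.sum(p * (1.0 - t))
    ti = (tp + eps) / (tp + delta * fn + (1.0 - delta) * fp + eps)
    return float((1.0 - ti) ** gamma)

def unified_focal_loss(p, t, lam=0.5, delta=0.7, gamma=0.75):
    """Convex combination of the two components (simplified sketch)."""
    return lam * focal_loss(p, t) + (1.0 - lam) * focal_tversky_loss(p, t, delta, gamma)
```

The Tversky term rewards region overlap (and, via `delta`, penalizes false negatives of small structures more heavily), while the focal term keeps a per-pixel gradient signal, which is the class-imbalance rationale for this family of losses.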