A novel method to predict OAR contour errors without a ground truth using geometric learning
Edward Henderson, United Kingdom
PD-0317
Abstract
Authors: Edward Henderson1, Andrew Green1, Marcel van Herk1, Eliana Vasquez Osorio1
1The University of Manchester, Division of Cancer Sciences, Manchester, United Kingdom
Purpose or Objective
The delineation of organs-at-risk (OARs) is a crucial step in radiotherapy planning. However, contouring is prone to variability even when observers follow detailed delineation guidelines or when contours are generated by an automated tool. We propose a novel method to automatically detect errors in OAR contours, without a ground truth, by predicting distances to a consensus guideline across the entire 3D surface of the OAR contour.
Material and Methods
We train a custom deep learning model which takes an OAR contour and the planning CT as input and outputs a 3D map of predicted distances to a consensus across the entire contour. Our model (Fig. 1) is constructed from three parts: a small convolutional neural network (CNN) to extract information from the CT scan in patches surrounding the input contour; a graph neural network (GNN) to extract geometric information from the input contour; and a multi-layer perceptron (MLP) to make the distance predictions.
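The full architecture is given in Figure 1; as a rough illustration only, a minimal sketch of this three-part design, assuming PyTorch and PyTorch Geometric, with purely illustrative layer sizes and patch dimensions (none taken from the paper), might look like:

```python
# Minimal sketch of a CNN + GNN + MLP per-vertex distance predictor.
# All sizes and hyperparameters here are hypothetical.
import torch
import torch.nn as nn
from torch_geometric.nn import GCNConv


class ContourErrorPredictor(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        # CNN: encodes a small CT patch centred on each contour vertex
        self.cnn = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv3d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),  # -> (N, 16)
        )
        # GNN: encodes the geometry of the contour mesh (vertex positions)
        self.gnn1 = GCNConv(3, hidden)
        self.gnn2 = GCNConv(hidden, hidden)
        # MLP: fuses image and geometric features into one distance per vertex
        self.mlp = nn.Sequential(
            nn.Linear(16 + hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, vertices, edge_index, ct_patches):
        # vertices: (N, 3), edge_index: (2, E), ct_patches: (N, 1, p, p, p)
        img_feat = self.cnn(ct_patches)                        # (N, 16)
        geo_feat = torch.relu(self.gnn1(vertices, edge_index))
        geo_feat = torch.relu(self.gnn2(geo_feat, edge_index))
        return self.mlp(torch.cat([img_feat, geo_feat], dim=1)).squeeze(-1)
```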
To create a dataset with which to train our model, we start with 34 head and neck planning CTs with parotid gland contours produced by an experienced oncologist and a radiographer [1]. The contours are flipped laterally, resulting in 68 "general" parotid glands. The oncologist's contours are perturbed 100 times each by applying structured noise, yielding a training dataset of 6800 perturbed OAR structures.
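The abstract does not specify the structured-noise model; the following is only an illustrative sketch of one way to generate smooth, bounded surface perturbations (the function name and parameters are hypothetical):

```python
# Illustrative only: spatially smooth, low-frequency vertex displacements,
# capped at a stated maximum amplitude. Not the published perturbation scheme.
import numpy as np


def perturb_contour(vertices, max_amplitude_mm=5.0, n_modes=4, seed=None):
    """Displace each vertex of an (N, 3) contour mesh along a smooth field."""
    rng = np.random.default_rng(seed)
    offsets = vertices - vertices.mean(axis=0)
    field = np.zeros_like(vertices)
    for _ in range(n_modes):
        direction = rng.normal(size=3)
        direction /= np.linalg.norm(direction)
        frequency = rng.uniform(0.05, 0.2)               # cycles per mm
        phase = rng.uniform(0, 2 * np.pi)
        amplitude = rng.uniform(0, max_amplitude_mm / n_modes)
        # Sinusoidal component varies smoothly across the surface
        scalar = np.sin(frequency * offsets @ direction + phase)
        field += amplitude * scalar[:, None] * direction
    # Cap the displacement magnitude at max_amplitude_mm
    norms = np.linalg.norm(field, axis=1, keepdims=True)
    field *= np.minimum(1.0, max_amplitude_mm / np.maximum(norms, 1e-9))
    return vertices + field
```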
We train the model to predict the nearest distance to the gold standard (the oncologist's original contour) across the entire surface of each perturbed structure. We perform a five-fold cross-validation and evaluate the robustness of our method on the unseen contours of the radiographer. We compare our model's predictions with the ground-truth distances between the radiographer's and oncologist's contours, using a two-sample Kolmogorov–Smirnov test to assess the similarity of the distance distributions for each contour. We report the proportion of OAR contours for which our model's predictions showed no significant difference from the ground truth (p > 0.1).
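As a sketch of this evaluation step, assuming SciPy and a simple nearest-vertex approximation of the surface distances (the helper names below are hypothetical):

```python
# Sketch: ground-truth nearest distances via a KD-tree, and a two-sample
# Kolmogorov-Smirnov comparison against the model's predicted distances.
import numpy as np
from scipy.spatial import cKDTree
from scipy.stats import ks_2samp


def nearest_distances(test_vertices, reference_vertices):
    """Nearest distance (mm) from each test vertex to the reference contour."""
    tree = cKDTree(reference_vertices)
    distances, _ = tree.query(test_vertices)
    return distances


def agrees_with_ground_truth(predicted_distances, true_distances, alpha=0.1):
    """True if the two distance distributions show no significant difference."""
    statistic, p_value = ks_2samp(predicted_distances, true_distances)
    return p_value > alpha  # threshold p > 0.1 as stated in the text
```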
Results
Our model's predictions show no significant difference from the ground truth for 78% of the radiographer's OAR contours, for distances greater than the CT resolution (1.44 mm) and smaller than the maximum perturbation generated in the training dataset (5 mm). Figure 2a shows violin plots of the absolute distance distributions for each of the radiographer's 68 contours. Figure 2b shows a 3D example of the oncologist's and radiographer's contours, the ground-truth distances, and our model's prediction.
Conclusion
Our model can predict errors in 3D parotid gland contours without a ground truth. The distance predictions produced by our model could be used to highlight regions of a contour that may require editing to be consistent with consensus guidelines. The presented model is currently a proof of concept and has been tested on only one OAR, but we anticipate that this approach can be expanded to multiple OARs and treatment sites.
[1] https://arxiv.org/abs/1809.04430