Evaluation of deep learning-based fiducial marker segmentation in pancreatic cancer patients
Abdella M. Ahmed,
Australia
PO-1698
Abstract
Authors: Abdella Mohammednur Ahmed1, Adam Mylonas2, Maegan Gargett1, Danielle Chrystall1, Adam Briggs1, Doan Trang Nguyen2, Paul Keall2, Andrew Kneebone1, George Hruby1, Jeremy Booth1
1Royal North Shore Hospital, Northern Sydney Cancer Centre, Sydney, Australia; 2The University of Sydney, ACRF Image X Institute, Faculty of Medicine and Health, Sydney, Australia
Purpose or Objective
Accurate fiducial marker segmentation is essential for kV-guided intra-fraction motion management to enable stereotactic ablative radiotherapy of the pancreas. We previously developed a compact convolutional neural network (CNN) model with 4 layers for marker segmentation in the prostate, achieving excellent sensitivity (99.0%) and specificity (98.9%). Deep learning techniques do not require additional learning images or prior knowledge of marker properties (such as shape or orientation), and they are applicable to kV images. In this study, we further develop our CNN model for marker tracking applied to pancreatic cancer patient data.
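The compact network referred to above is not specified in this abstract. Purely as an illustration, a 4-layer patch-classification CNN of this kind could be sketched in Keras as follows; the patch size, layer widths and activation choices are assumptions, not the published architecture.

```python
# Hypothetical sketch of a compact 4-layer CNN that labels a kV subimage
# (patch) as "marker" or "no marker". The 32x32 patch size, layer widths
# and kernel sizes are assumptions for illustration only.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_compact_cnn(patch_size=32):
    model = models.Sequential([
        layers.Input(shape=(patch_size, patch_size, 1)),       # grayscale kV patch
        layers.Conv2D(16, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Conv2D(32, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(64, activation="relu"),
        layers.Dense(1, activation="sigmoid"),                 # P(patch contains a marker)
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model
```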
Material and Methods
We evaluated a CNN with 6 layers and a transfer learning approach based on the pretrained compact CNN. Training data from the ethics-approved SPAN-C Trial for pancreas SABR was utilised. The training dataset contained both cone beam computed tomography (CBCT) projections and triggered kV images acquired during treatment (a total of 23 fractions from 7 patients with implanted fiducial markers). Data augmentation was performed on subimages that contained markers, giving a total dataset of 1.3 million subimages. The 6-layer CNN was trained on the full dataset, while the transfer learning approach was trained with 32.3% of the full dataset. Cross-validation-based early stopping was employed in both cases to avoid overfitting. The performance of each model was tested on unseen CBCT and kV images from 5 fractions from 2 patients. Sensitivity, specificity, the Receiver Operating Characteristic (ROC) curve and the Area Under the ROC Curve (AUC) were evaluated. The root-mean-square error (RMSE) of the marker centroids predicted by the CNN models was calculated relative to the manually segmented ground truth.
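As a sketch of the two training strategies described above (training a deeper CNN from scratch versus transfer learning from the pretrained compact CNN, both with early stopping), the Keras-style example below illustrates the idea; the layer widths, frozen-layer choice and early-stopping settings are assumptions, not the actual training configuration used in the study.

```python
# Hypothetical sketch: (a) a deeper CNN trained from scratch on the full
# patch dataset, and (b) transfer learning that reuses the pretrained
# compact CNN and retrains only its dense head on a data subset.
# Cross-validation-based early stopping is approximated here by a
# validation-loss EarlyStopping callback. All hyperparameters are
# illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models, callbacks

def build_deeper_cnn(patch_size=32):
    # ~6 learned layers trained from scratch (widths are illustrative).
    return models.Sequential([
        layers.Input(shape=(patch_size, patch_size, 1)),
        layers.Conv2D(16, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Conv2D(32, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dense(64, activation="relu"),
        layers.Dense(1, activation="sigmoid"),
    ])

def finetune_pretrained(pretrained: tf.keras.Model):
    # Transfer learning: freeze the convolutional front end of the
    # pretrained compact CNN and retrain only the dense head.
    for layer in pretrained.layers[:-2]:
        layer.trainable = False
    pretrained.compile(optimizer="adam", loss="binary_crossentropy",
                       metrics=["accuracy"])
    return pretrained

early_stop = callbacks.EarlyStopping(
    monitor="val_loss", patience=5, restore_best_weights=True)

# Example training call (x_train, y_train, x_val, y_val are placeholders):
# model.fit(x_train, y_train, validation_data=(x_val, y_val),
#           epochs=100, batch_size=256, callbacks=[early_stop])
```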
Results
The sensitivity and specificity of the fully trained CNN were 98.4% and 99.0%, respectively, while the transfer learning model achieved 94.3% and 99.3%, respectively. The AUCs of the fully trained model and the transfer learning model were 0.9887 and 0.9889, respectively. The mean RMSE of the fully trained CNN was 0.20 ± 0.03 mm and 0.35 ± 0.05 mm in the x and y directions (of the kV image), respectively, while the transfer learning model had 0.15 ± 0.02 mm and 0.35 ± 0.04 mm in the x and y directions, respectively.
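For readers reproducing the evaluation, the sketch below shows one plausible way to compute sensitivity, specificity, the ROC curve/AUC and the per-axis centroid RMSE with NumPy and scikit-learn; the variable names and the 0.5 decision threshold are placeholder assumptions, not the study's actual analysis code.

```python
# Hypothetical evaluation sketch. y_true/y_score are per-patch labels and
# CNN output probabilities; pred_xy/gt_xy are Nx2 arrays of predicted and
# manually segmented marker centroids (mm) in the kV image plane.
import numpy as np
from sklearn.metrics import confusion_matrix, roc_curve, roc_auc_score

def classification_metrics(y_true, y_score, threshold=0.5):
    y_pred = (np.asarray(y_score) >= threshold).astype(int)
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    sensitivity = tp / (tp + fn)               # true positive rate
    specificity = tn / (tn + fp)               # true negative rate
    fpr, tpr, _ = roc_curve(y_true, y_score)   # points on the ROC curve
    auc = roc_auc_score(y_true, y_score)       # area under the ROC curve
    return sensitivity, specificity, fpr, tpr, auc

def centroid_rmse(pred_xy, gt_xy):
    # Per-axis RMSE (mm) of predicted marker centroids vs. ground truth.
    diff = np.asarray(pred_xy) - np.asarray(gt_xy)
    return np.sqrt(np.mean(diff ** 2, axis=0))  # [rmse_x, rmse_y]
```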
Conclusion
A deep learning approach was implemented to classify implanted fiducial markers in pancreatic cancer patient data. The accuracy of marker position prediction by the CNN models, relative to the ground truth, was submillimetre, as required for stereotactic ablative radiotherapy of the pancreas.