Towards AI-based motion modelling
Chiara Paganelli,
Italy
SP-0539
Authors: Chiara Paganelli1
1Politecnico di Milano, Dipartimento di Elettronica, Informazione e Bioingegneria, Milano, Italy
Abstract Text
In external beam radiotherapy, the detection and compensation of organ motion is crucial to ensure the accuracy and effectiveness of the treatment of mobile tumors. This is even more important when geometric and dosimetric treatment adaptation is pursued, when tumor tracking techniques are adopted, or when dealing with unconventional treatment modalities such as particle therapy.
Over the last few decades, the use of volumetric imaging modalities to aid target definition has become essential, and respiratory-correlated (4D) imaging is applied on a routine basis to quantify uncertainties due to organ motion, with 4D computed tomography (4DCT) being the current clinical standard. However, 4DCT cannot be considered representative of each breathing cycle (intra-fraction variability) at every therapy fraction (inter-fraction variability), thus precluding a time-resolved volumetric (3D) description of respiratory motion, especially for irregularly breathing patients. The recent integration of radiation-free Magnetic Resonance Imaging (MRI) systems with radiotherapy treatment units has been put forward to overcome this issue. These systems provide time-resolved 2D imaging as the state of the art for motion compensation, although limitations in the spatio-temporal trade-off prevent the acquisition of time-resolved 3D data.
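As a rough illustration of why a single representative cycle loses this variability, the sketch below performs retrospective phase binning on a synthetic surrogate trace; the peak detection, the ten-bin division and the trace itself are illustrative assumptions, not any specific vendor's reconstruction.

```python
import numpy as np

# Synthetic 20 s surrogate trace with a ~4 s breathing period (illustrative).
t = np.linspace(0.0, 20.0, 2000)
surrogate = np.sin(2 * np.pi * t / 4.0)

# End-inhale peaks of the surrogate delimit the breathing cycles.
peaks = np.where((surrogate[1:-1] > surrogate[:-2]) &
                 (surrogate[1:-1] > surrogate[2:]))[0] + 1

# Assign a linear respiratory phase (0..1) within each cycle, then 10 bins.
phase = np.zeros_like(t)
for p0, p1 in zip(peaks[:-1], peaks[1:]):
    phase[p0:p1] = np.linspace(0.0, 1.0, p1 - p0, endpoint=False)
bins = np.floor(phase * 10).astype(int)

# All samples sharing a bin index, regardless of which cycle they come from,
# are pooled into one phase image: cycle-to-cycle variability is averaged out.
```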
Efforts have been made to implement motion models able to predict motion states not directly depicted in 4D imaging, as a way to support on-line tumor tracking or off-line treatment verification and adaptation. Local motion models correlating the internal anatomy with external respiratory surrogates are already available in the clinic for tumor tracking in X-ray radiotherapy (e.g. Vero and CyberKnife), although they provide only local information on tumor motion, without accounting for the surrounding healthy tissues.
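As an illustration of such a local correlation model, the minimal sketch below fits a low-order polynomial between an external surrogate amplitude and the internal tumor position; the data, the quadratic form and the function names are illustrative assumptions, not the implementation of any specific tracking system.

```python
import numpy as np

# Illustrative paired samples: external surrogate amplitude (mm) and internal
# tumor superior-inferior position (mm), e.g. from a few stereoscopic X-ray
# snapshots acquired while the external surrogate is recorded.
surrogate = np.array([0.0, 2.1, 4.0, 6.2, 8.1, 9.8, 7.9, 5.8, 3.9, 1.8])
tumor_si = np.array([0.0, 3.5, 6.8, 10.4, 13.9, 16.5, 13.6, 10.0, 6.5, 3.2])

# Local correlation model: a low-order polynomial mapping surrogate -> tumor
# position (the quadratic term can capture a simple non-linearity).
coeffs = np.polyfit(surrogate, tumor_si, deg=2)

def predict_tumor_position(s):
    """Estimate tumor SI position (mm) from the current surrogate value."""
    return np.polyval(coeffs, s)

print(predict_tumor_position(5.0))  # predicted SI shift for a 5 mm surrogate
```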
In this regard, global motion models are designed to predict the entire 3D anatomo-pathological configuration at different respiratory phases, by relating the deformation vector fields (obtained through deformable image registration) of the respiratory phases depicted in a 4D dataset with a real-time surrogate of the motion pattern. Similarly, dose variation models have also been proposed in the literature for applications in particle therapy, to estimate dose alterations during un-imaged respiratory states. All these approaches, however, present several limitations that prevent their clinical implementation. Image-based global motion models mainly depend on the accuracy of deformable image registration and have limited capability to compensate for intra-treatment conditions that differ significantly from the planning 4DCT, whereas dose variation models do not include tissue deformations. On top of these limitations, such respiratory motion modelling techniques are still far from a real-time implementation.
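A minimal sketch of a global motion model of this kind is given below, assuming one common formulation from the literature: principal component analysis (PCA) of the deformation vector fields, with the PCA scores regressed linearly on the surrogate. Array sizes and data are placeholders, not a specific published model.

```python
import numpy as np

# Assumed inputs:
#   dvfs      - (n_phases, 3*n_voxels) deformation vector fields obtained by
#               deformable image registration of each 4D phase to a reference
#   surrogate - (n_phases,) surrogate value (e.g. diaphragm position) per phase
rng = np.random.default_rng(0)
n_phases, n_voxels = 10, 500
dvfs = rng.normal(size=(n_phases, 3 * n_voxels))   # placeholder DVFs
surrogate = np.linspace(0.0, 1.0, n_phases)        # placeholder signal

# 1) PCA of the DVFs around their mean (keep a few respiratory motion modes).
mean_dvf = dvfs.mean(axis=0)
u, s, vt = np.linalg.svd(dvfs - mean_dvf, full_matrices=False)
n_modes = 2
scores = u[:, :n_modes] * s[:n_modes]              # per-phase PCA scores
modes = vt[:n_modes]                               # spatial motion modes

# 2) Linear regression of the PCA scores on the surrogate.
A = np.column_stack([surrogate, np.ones(n_phases)])
w, *_ = np.linalg.lstsq(A, scores, rcond=None)

def predict_dvf(s_now):
    """Synthesise the DVF of an un-imaged respiratory state from the surrogate."""
    score_now = np.array([s_now, 1.0]) @ w
    return mean_dvf + score_now @ modes

dvf_now = predict_dvf(0.35)  # DVF used to warp the reference volume in real time
```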
Artificial intelligence (AI) can therefore be exploited to make the estimation of time-resolved 3D volumes fast, as AI-based techniques are capable of learning from experience and of coping with changing circumstances in real time. It should be noted that AI-based predictive models are not novel in the treatment of moving organs in external beam radiotherapy, as the local motion models implemented for tumor tracking often rely on machine learning. Nevertheless, recent studies in the literature, especially in MRI-guided radiotherapy, have highlighted that AI-based solutions could also play a role in estimating the 3D anatomy in real time, making them attractive for tumor tracking, treatment adaptation and dose accumulation. As for conventional motion modelling techniques, however, the lack of 3D ground-truth data to evaluate their capability to predict geometric and dosimetric deviations as a function of motion is still a limitation. Digital phantoms can be useful for validation purposes, although their translation to clinical cases is not straightforward.
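As an example of how an AI-based model could be plugged into such a pipeline, the following PyTorch sketch regresses the motion-model coefficients of the previous example directly from a 2D cine-MRI slice; the architecture, tensor sizes and training data are purely illustrative assumptions.

```python
import torch
import torch.nn as nn

class SliceToMotionCoeffs(nn.Module):
    """Small CNN regressing motion-model coefficients from a 2D cine slice."""
    def __init__(self, n_modes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(8, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(16, n_modes)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

model = SliceToMotionCoeffs()
optim = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Placeholder training batch: cine slices (B, 1, H, W) and PCA-score targets.
slices = torch.randn(4, 1, 64, 64)
targets = torch.randn(4, 2)

pred = model(slices)
loss = loss_fn(pred, targets)
optim.zero_grad()
loss.backward()
optim.step()

# At treatment time, the regressed coefficients would drive the motion model
# that warps the reference 3D volume, yielding a time-resolved volumetric
# estimate for tracking, adaptation or dose accumulation.
```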
This talk aims to provide insights into the limitations of conventional respiratory motion modelling techniques and the potential of AI-based motion models for the treatment of mobile lesions with X-ray and particle radiotherapy, with a particular focus on the problem of their validation.