CT2S, ARF10, BoneStrength

Application description

The Computed Tomography to Strength (CT2S) workflow is a digital twin solution used to predict the risk of hip fracture for an individual based on CT scans. CT2S is offered as a web-based service: potential users can find all the information about CT2S on the website and can approach the application owners through it by making a request. The user then securely transfers anonymised CT scans to the application owners, who segment the bones from the CT images, generate a 3D finite element mesh, assign local material properties based on CT attenuation, and perform a set of finite element (FE) analyses in ANSYS to estimate bone strength and calculate the risk of hip fracture. The final results are compiled into a PDF file and sent back to the email address of the user who requested the service [1]. The workflow is installed and usable on Sheffield's HPC system ShARC, and has also been tested on ARCHER (EPCC).

The target users for CT2S are primarily clinicians and researchers. The clinical application of this code is to provide a more accurate intervention strategy for elderly people with weaker bones, such as those who are clinically defined as osteopenic but are not receiving any treatment. The workflow provides a complete assessment of bone strength in 3D and analyses the risk that a particular individual sustains a fracture in a future fall. This information can be used in addition to clinical history and Dual-energy X-ray Absorptiometry (DXA) evaluations to determine the necessary intervention strategy. For researchers, CT2S can be applied to a wide range of problems concerning the mechanical response of long bones, including the prediction of fracture risk at different locations and for different age groups. With this approach the code is not directly distributed, and the HPC elements are hidden from clinicians, giving them easy access to this technology without requiring a good understanding of HPC systems.

USFD has also developed a multiscale model for the current absolute risk of hip fracture (ARF0) [3]. In addition to femur strength (obtained from the CT2S pipeline), this model includes information on body height, weight and fall rate, which are used to predict the severity and frequency of falls. Recent work in our group has further included subject-specific information on three-dimensional trochanteric soft-tissue geometry from proximal femur CT images, improving hip fracture classification accuracy to 87% in a 100-subject postmenopausal osteoporotic cohort. We also modified the ARF0 model to predict the 10-year risk of hip fracture (ARF10) by coupling it to a model simulating the spatiotemporal loss of bone density driven by hormonal activity.

Based on the ARF10 workflow developed by USFD, UNIBO has replicated the biomechanical computed tomography pipeline and is developing an in silico trial solution named BoneStrength. The code runs on Cartesius (SURF, The Netherlands) and on Galileo and Galileo100 (CINECA, Italy). Using a statistical anatomy atlas trained on the Sheffield cohort, over 1000 synthetic proximal femur FE models were generated from 94 real patients' data. The ARF0 pipeline was applied to the synthetic patients, showing good persistence of the original cohort's biomechanical characteristics in the synthetic cohort. UNIBO is now working on bone ageing (physiology/disease) and drug effect (treatment) models in order to run a massive simulation of a placebo vs drug phase III clinical trial involving 1000 patients over 5-10 years.
The natural users of the solution will be pharmaceutical companies, which will be able to tune the cohort inclusion criteria to maximise drug effects, or even to reach statistical significance with fewer actually enrolled patients in a clinical trial.

 

Technical specifications

The modelling elements of the CT2S workflow focus on estimating the strength of an individual's femur under a series of loading conditions, using a subject-specific FE model. This step involves ITK-SNAP for image segmentation to isolate the bone surface, ICEM CFD (ANSYS Inc, PA, USA) for meshing the segmented bone geometry with tetrahedral elements, Bonemat for personalising the bone material properties, and ANSYS (ANSYS Inc, PA, USA), running on ShARC, for performing the simulations. A total of 28 load simulations are performed (as replica computing) to comprehensively investigate the bone's response under several fall configurations. Each load simulation requires 96 GB of memory shared across three cores in an OpenMP session. USFD has the capability to run the entire batch of 28 simulations simultaneously. Results are post-processed in MATLAB (MathWorks, Natick, MA) with custom-written scripts [1].
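The 28 fall simulations form a replica-computing batch: the same FE model is solved independently under each loading condition. A minimal Python sketch of how such a batch could be enumerated; the 7x4 angle grid is a purely illustrative assumption (the actual CT2S orientation set is not specified here), and only the per-job resources (3 cores, 96 GB) come from the text:

```python
from itertools import product

# Hypothetical parameterisation of the 28 fall configurations: a 7x4 grid
# (7 impact directions x 4 internal rotations) is assumed for illustration.
IMPACT_ANGLES = [0, 15, 30, 45, 60, 75, 90]   # degrees, assumed
ROTATION_ANGLES = [-30, -15, 0, 15]           # degrees, assumed

def build_load_cases():
    """Return one job specification per fall configuration (replica computing:
    each simulation is independent and can be submitted simultaneously)."""
    return [
        {"impact_deg": a, "rotation_deg": r,
         "cores": 3, "memory_gb": 96}          # per-job resources from the text
        for a, r in product(IMPACT_ANGLES, ROTATION_ANGLES)
    ]

cases = build_load_cases()
print(len(cases))  # 28 independent simulations, submittable as one batch
```

Because the jobs are independent, the batch maps directly onto an HPC array-job or pilot-job submission.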

The ARF0 pipeline requires additional subject-specific information, specifically body weight and body height. The pipeline is implemented as a MATLAB function which takes these scalar values as input, in addition to a matrix of bone strength values as a function of the 28 impact orientations mentioned above. The MATLAB function outputs the value of ARF0. It performs a Monte-Carlo integration to estimate the risk of fracture over multiple fall scenarios. A sensitivity analysis has shown that 10 000 fall simulations are sufficient to estimate ARF0 with a reasonable residual uncertainty [2]. The fall scenarios can be analysed in parallel, and the ARF0 code has been tested on a 20-core compute node of the ShARC HPC system using the MATLAB default MPICH2 library. Each ARF0 job has negligible memory and CPU requirements in addition to those of one CT2S job.
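The Monte-Carlo integration step can be sketched as follows. This is an illustrative stand-in, not the published ARF0 model: the impact-force distribution and the simple load-versus-strength fracture criterion are assumptions made only to show the sampling structure.

```python
import random

def estimate_arf0(strengths, n_falls=10_000, seed=0):
    """Monte-Carlo estimate of the absolute risk of fracture (ARF0) - a
    minimal sketch, not the published model. `strengths` holds the
    FE-predicted bone strength (N) for each of the 28 impact orientations;
    the impact-force distribution below is a stand-in assumption."""
    rng = random.Random(seed)
    fractures = 0
    for _ in range(n_falls):
        orientation = rng.randrange(len(strengths))   # random fall direction
        impact_force = rng.gauss(3000.0, 800.0)       # N, assumed distribution
        if impact_force > strengths[orientation]:     # load exceeds strength
            fractures += 1
    return fractures / n_falls

# A uniformly strong femur fractures rarely under this assumed loading
print(estimate_arf0([6000.0] * 28))
```

Since each sampled fall is independent, the 10 000 scenarios parallelise trivially, which is why the MATLAB implementation scales well across a compute node.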

For the ARF10 pipeline, the CT2S pipeline needs to be executed 10 times, each time changing only the bone material properties encoded in the FE model. The algorithm to modify the FE model is implemented as a sequence of an ANSYS macro (to write the current material properties to a text file), a MATLAB script (to read the text file, modify the property values and write them back to a text file) and an ANSYS macro (to read the modified text file). The output of each of the 10 FE models is passed through the ARF0 pipeline, and the results are aggregated to obtain ARF10. A full analysis of resource usage of the ARF10 pipeline is underway, but the following must be noted: the analysis of each of the 10 FE models is identical to that of one CT2S job, and these 10 CT2S jobs can be performed independently of each other. The rest of the ARF10 pipeline comprises (a) generating the 10 FE models, (b) executing ARF0 on the output of each CT2S analysis, and (c) aggregating the ARF0 results to obtain ARF10. These steps are only partially parallelisable, but their resource requirements relative to the 10 CT2S jobs are negligible. Hence, the ARF10 pipeline is highly parallelisable as a whole.
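The ARF10 orchestration described above can be sketched with placeholder stages. Everything numeric here is an illustrative assumption: the 1%/year ageing rate, the placeholder ARF0 relation, and the cumulative-risk aggregation are stand-ins, not the published model; only the structure (10 independent yearly CT2S jobs feeding ARF0, then aggregation) follows the text.

```python
def age_material_properties(props, year):
    """Stand-in for the ANSYS/MATLAB round-trip that rewrites the FE
    material properties for a given simulated year (assumed 1%/year
    density-driven stiffness loss, purely illustrative)."""
    return [p * 0.99 ** year for p in props]

def run_ct2s(props):
    """Placeholder for one full CT2S job (28 FE fall simulations);
    here it simply returns the per-orientation strengths unchanged."""
    return props

def arf0(strengths):
    """Placeholder ARF0: illustrative inverse relation to mean strength."""
    return 1.0 / (1.0 + sum(strengths) / len(strengths))

def arf10(baseline_props):
    """Run the 10 yearly CT2S jobs (independent, hence parallelisable) and
    aggregate the yearly ARF0 values; a simple cumulative risk is shown,
    the published aggregation may differ."""
    yearly = [arf0(run_ct2s(age_material_properties(baseline_props, y)))
              for y in range(10)]
    no_fracture = 1.0
    for r in yearly:
        no_fracture *= (1.0 - r)
    return 1.0 - no_fracture
```

The loop over years is embarrassingly parallel, which is the structural reason the ARF10 pipeline is highly parallelisable as a whole.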

 

HPC usage and parallel performance

The main motivation for HPC usage by the CT2S and BoneStrength pipelines is the need for a rapid and cost-effective response to the clinical question. In particular, by leveraging HPC parallelism the CT2S FE simulations can run simultaneously, and the physician can decide on the patient's therapy within an hour. Conversely, the BoneStrength simulations would require around 100 years if run serially, whereas HPC use can keep the user's waiting time under two weeks.
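The quoted times give a back-of-envelope estimate of the aggregate parallelism required; the only assumption below is 52 weeks per year:

```python
# Aggregate speedup implied by the quoted times:
# ~100 years serially vs. a target waiting time of ~2 weeks.
YEARS_SERIAL = 100
WEEKS_TARGET = 2

weeks_serial = YEARS_SERIAL * 52          # assume 52 weeks/year
required_speedup = weeks_serial / WEEKS_TARGET
print(required_speedup)  # -> 2600.0, i.e. ~2600x aggregate parallel speedup
```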

While the scalability of a single fall simulation relies on the commercial software's performance (with 8 cores using MPI parallelism we reached a 6x speedup), the full pipeline shows linear scaling, being a replica-computing pattern simulation. The largest BoneStrength production run involved 300 nodes on Cartesius (7200 cores) for a week.
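The quoted 6x speedup on 8 cores corresponds to a 75% parallel efficiency. Assuming Amdahl's law is a reasonable model for the solver (an assumption, since the commercial code is a black box), the implied serial fraction can also be estimated:

```python
def parallel_efficiency(speedup, cores):
    """Parallel efficiency is simply speedup divided by core count."""
    return speedup / cores

def amdahl_serial_fraction(speedup, cores):
    """Invert Amdahl's law S = 1 / (f + (1 - f)/p) to estimate the serial
    fraction f consistent with an observed speedup S on p cores."""
    return (cores / speedup - 1.0) / (cores - 1.0)

print(parallel_efficiency(6.0, 8))               # 0.75
print(round(amdahl_serial_fraction(6.0, 8), 3))  # 0.048
```

So a serial fraction of roughly 5% in the solver would explain the observed per-simulation scaling, while the replica layer above it scales linearly.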

By default, each single fall simulation in ANSYS minimises RAM consumption (approximately 5 GB) but has a substantial file I/O requirement (50-80 GB). While this approach is useful on workstations, or for a single patient or small cohorts, for large cohorts (>100 simultaneous patients) it is preferable to store all the temporary information in RAM and minimise file I/O to avoid bottlenecks. ANSYS allows this approach, and the final requirements become 15 GB of RAM and 2-4 GB of file I/O. Each single-patient 1-year simulation (28 falls) produces around 135 GB of final data, from which 20 MB are extracted for further postprocessing. These data can be stored for quality checks or deleted to limit space consumption.
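The per-patient figures above translate into cohort-level storage requirements; a simple estimate using the quoted numbers (135 GB raw, 20 MB extracted per patient-year) and the 1000-patient BoneStrength cohort size, treating one patient-year per patient:

```python
def cohort_storage(patients, per_patient_gb=135.0, extracted_mb=20.0):
    """Back-of-envelope storage for one simulated year of a cohort, using
    the per-patient figures quoted in the text."""
    raw_tb = patients * per_patient_gb / 1024.0     # if all data are kept
    kept_gb = patients * extracted_mb / 1024.0      # postprocessing subset only
    return raw_tb, kept_gb

raw, kept = cohort_storage(1000)
print(f"raw: {raw:.0f} TB, kept for postprocessing: {kept:.1f} GB")
```

Keeping only the extracted 20 MB per patient reduces the footprint by nearly four orders of magnitude, which is why deleting the raw data after quality checks is attractive.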

In CompBioMed2 the CT2S/BoneStrength workflow has been automated and optimised by reducing file I/O and switching to the more efficient ANSYS MPI parallelism. Next developments include the use of a dedicated job manager (we are exploring QCG-PilotJob, deployed at PSNC) and the possible porting of the workflow to the highly parallel Alya solver developed at BSC. We also plan to collaborate with ATOS to investigate simulation scaling and performance on various traditional and experimental architectures.

 

Cited references:

[1] I. Benemerito, W. Griffiths, J. Allsopp, W. Furnass, P. Bhattacharya, X. Li, A. Marzo, S. Wood, M. Viceconti, A. Narracott, Computer Methods and Programs in Biomedicine 208, 106200 (2021).

[2] P. Bhattacharya, Z. Altai, M. Qasim et al., Biomech Model Mechanobiol 18, 301–318 (2019).

[3] M. Taylor, M. Viceconti, P. Bhattacharya, X. Li, Journal of the Mechanical Behavior of Biomedical Materials 118, 104434 (2021).

 

Related articles

  • Altai Z et al. 2019, The effect of boundary and loading conditions on patient classification using finite element predicted risk of fracture. DOI
  • Qasim M et al. 2016, Patient-specific finite element estimated femur strength as a predictor of the risk of hip fracture: the effect of methodological determinants. DOI
For more information about the applications supported in CompBioMed, you can contact us at "software at compbiomed.eu".