Poster Presentation 29th Australian and New Zealand Bone and Mineral Society Annual Scientific Meeting 2019

Please Don’t Move - feasibility of automatically evaluating motion artefact from peripheral Quantitative Computed Tomography scans using a deep learning approach (#132)

Paola Chivers 1 2 3, Sajib Saha 4, Timo Rantalainen 1 2 3 5, Yogi Kanagasingam 4, Aris Siafarikas 1 2 3 6 7 8, Fleur McIntyre 2 9, Beth Hands 1 2, Nicolas Hart 1 2 3
  1. Institute for Health Research, The University of Notre Dame Australia, Fremantle, Western Australia, Australia
  2. Western Australian Bone Research Collaboration, Perth, Western Australia, Australia
  3. Exercise Medicine Research Institute, Edith Cowan University, Joondalup, Western Australia, Australia
  4. Australian e-Health Research Centre, Health and Biosecurity Unit, CSIRO, Floreat, Western Australia, Australia
  5. Gerontology Research Centre, University of Jyvaskyla, Jyvaskyla, Finland
  6. Department of Endocrinology and Diabetes, Perth Children's Hospital, Perth, Western Australia, Australia
  7. Telethon Kids Institute for Child Health Research, Perth, Western Australia, Australia
  8. Medical School, Division of Paediatrics, University of Western Australia, Nedlands, Western Australia, Australia
  9. School of Health Science, The University of Notre Dame Australia, Fremantle, Western Australia, Australia

Objectives. Quality assessment of peripheral Quantitative Computed Tomography (pQCT) scans, particularly the identification and exclusion of scans with motion artefact, predominantly relies on human-operated classification. We have previously shown that a textural analysis-based classifier provides moderate-to-good classification of motion artefact in pQCT scans; however, this approach was not considered adequate for automatic classification of pQCT scan quality.1 This feasibility study evaluated whether deep learning models can automatically detect motion artefact in pQCT scans.

Methods. A total of 280 pQCT scans of two long bones, the tibia and radius, at the 4% and 66% sites, from an adolescent cohort with movement difficulties, were rated by a human expert as 'Accept' image quality (n=212) or 'Reject' (n=68) as per the Blew et al.2 classification. To avoid bias when training the deep learning models, we used a similar number of images in both rating categories, increasing the number of 'Reject' images to 204 by rotating each image by 90 and 180 degrees. From a clinical perspective, this presents the same cross-section from different viewing angles (see the sketch below).
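A minimal sketch of this class-balancing step, assuming the 'Reject' scans are stored as image files in a directory; the paths, file format and PIL-based approach are illustrative assumptions, not the authors' actual pipeline.

```python
# Triple the 'Reject' class by adding 90- and 180-degree rotations of each
# scan (68 originals -> 204 images), matching the balancing step described
# in the Methods. Paths and PNG format are hypothetical.
from pathlib import Path
from PIL import Image

reject_dir = Path("scans/reject")  # assumed location of 'Reject' scans

for path in sorted(reject_dir.glob("*.png")):
    img = Image.open(path)
    for angle in (90, 180):
        # Same cross-section, different viewing angle.
        rotated = img.rotate(angle, expand=True)
        rotated.save(path.with_name(f"{path.stem}_rot{angle}{path.suffix}"))
```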

Consistent with other studies in the deep learning domain, data augmentation was applied to both categories, specifically rotation (in the range of -5 to 5 degrees), shearing (range 0.2), scaling (range 0.2) and flipping, to further increase the number of images. The training (90% of scans) and test (10%) sets were randomly selected prior to performing any data augmentation, so that no augmented copy of a test scan could leak into the training set. Three deep learning models3 were tested: Inception-v3, ResNet50 and InceptionResNet-v2 (a configuration sketch follows).
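A hedged sketch of these augmentation settings and one of the three architectures, using tf.keras; the abstract does not name a framework, so the directory layout, input size, optimiser and training schedule are assumptions for illustration only.

```python
# Augmentation mirrors the abstract: rotation +/-5 degrees, shear 0.2,
# zoom (scaling) 0.2, and flipping. The 90/10 split is made on the original
# scans first; augmentation is applied to training data only.
import tensorflow as tf
from tensorflow.keras.preprocessing.image import ImageDataGenerator

train_gen = ImageDataGenerator(
    rescale=1.0 / 255,
    rotation_range=5,       # rotation in the range -5 to 5 degrees
    shear_range=0.2,        # shearing, range 0.2
    zoom_range=0.2,         # scaling, range 0.2
    horizontal_flip=True,   # flipping
)
test_gen = ImageDataGenerator(rescale=1.0 / 255)  # no augmentation for test

train_data = train_gen.flow_from_directory(
    "data/train", target_size=(224, 224), class_mode="binary")
test_data = test_gen.flow_from_directory(
    "data/test", target_size=(224, 224), class_mode="binary")

# ResNet50 shown here; Inception-v3 and InceptionResNet-v2 are available
# analogously from tf.keras.applications.
base = tf.keras.applications.ResNet50(
    include_top=False, weights="imagenet", pooling="avg",
    input_shape=(224, 224, 3))
model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(1, activation="sigmoid"),  # Accept vs Reject
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.fit(train_data, epochs=20, validation_data=test_data)
```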

Results. All three deep learning models reported 100% sensitivity; however, only the ResNet50 model also reported 100% specificity and accuracy. Inception-v3 reported 94.8% specificity and 97.2% accuracy, while InceptionResNet-v2 reported 96.4% specificity and 98.7% accuracy.
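For clarity, these metrics relate to the test-set confusion matrix as sketched below; the labels and predictions shown are placeholders, not the study's data.

```python
# Sensitivity, specificity and accuracy from a binary confusion matrix,
# treating 'Reject' (motion artefact present) as the positive class.
from sklearn.metrics import confusion_matrix

y_true = [1, 1, 0, 0, 0, 1, 0, 1]  # 1 = 'Reject', 0 = 'Accept' (placeholder)
y_pred = [1, 1, 0, 0, 1, 1, 0, 1]  # placeholder model output

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)   # proportion of 'Reject' scans detected
specificity = tn / (tn + fp)   # proportion of 'Accept' scans passed
accuracy = (tp + tn) / (tp + tn + fp + fn)
print(f"sensitivity={sensitivity:.3f} specificity={specificity:.3f} "
      f"accuracy={accuracy:.3f}")
```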

Conclusion. This feasibility study evaluated the ability of three state-of-the-art deep learning models to detect motion artefact in pQCT scans. A deep learning approach appears to be a robust method for automatically and accurately detecting motion artefact in reconstructed pQCT images.

  1. Rantalainen, T., Chivers, P., Beck, B., Robertson, S., Hart, N., Nimphius, S., Weeks, B., McIntyre, F., Hands, B., Siafarikas, A. (2017). Please Don't Move - Evaluating Motion Artefact from pQCT Scans Using Textural Features. Journal of Clinical Densitometry, 21(2), 260-268. DOI: 10.1016/j.jocd.2017.07.002.
  2. Blew RM, Lee VR, Farr JN, et al. (2014). Standardizing evaluation of pQCT image quality in the presence of subject movement: qualitative versus quantitative assessment. Calcified Tissue International, 94, 202-211.
  3. Saha S, Nassisi M, Wang M, Lindenberg S, Kanagasingam Y, Sadda S, Hu ZJ. Automated detection and classification of early AMD biomarkers using deep learning. Scientific Reports (ahead of print).