Objectives. Assessment of the quality of peripheral Quantitative Computed Tomography (pQCT) scans, in particular the identification of scans with motion artefact, predominantly relies on human-operated classification. We have previously shown that a textural analysis-based classifier provides moderate-to-good classification of motion artefact in pQCT scans; however, this approach was not considered adequate for automatic classification of pQCT scan quality1.
Methods. A total of 280 pQCT scans of the tibia and radius at the 4% and 66% sites, acquired from an adolescent cohort with movement difficulties, were rated by a human expert as 'Accept' image quality (n=212) or 'Reject' (n=68) according to the Blew et al.2 classification. To avoid bias when training the deep learning models, we used a similar number of images in both rating categories, increasing the number of 'Reject' images to 204 by adding copies rotated by 90 and 180 degrees. From a clinical perspective, this means viewing the same cross-section from different angles.
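As a minimal sketch of this class-balancing step (assuming the cross-sections are loaded as 2-D NumPy arrays; the function and variable names are hypothetical):

    import numpy as np

    def balance_reject_class(reject_images):
        # Triple the 'Reject' set by adding 90- and 180-degree rotations;
        # rotating a cross-section changes only the viewing angle, not the
        # anatomy or the motion artefact it contains.
        balanced = []
        for img in reject_images:
            balanced.append(img)                 # original orientation
            balanced.append(np.rot90(img, k=1))  # rotated 90 degrees
            balanced.append(np.rot90(img, k=2))  # rotated 180 degrees
        return balanced

    # 68 'Reject' scans -> 68 * 3 = 204 images,
    # comparable to the 212 'Accept' scans.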
Consistent with other studies in the deep learning domain, data augmentation was applied to both categories to further increase the number of images, specifically rotation (in the range -5 to 5 degrees), shearing (range 0.2), scaling (range 0.2) and flipping. The training (90% of scans) and test (10%) sets were randomly selected prior to performing any data augmentation, to avoid leakage of augmented test images into training. Three deep learning3 models were tested: ResNet50, Inception-v3 and InceptionResNet-v2.
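A sketch of the split-then-augment pipeline under the stated parameters, assuming a Keras-style workflow (the abstract does not name the framework; `images`, `labels` and all keyword values other than the stated augmentation ranges are assumptions):

    from sklearn.model_selection import train_test_split
    from tensorflow.keras.preprocessing.image import ImageDataGenerator

    # Split before augmenting so no augmented copy of a test scan
    # can leak into the training set.
    train_x, test_x, train_y, test_y = train_test_split(
        images, labels, test_size=0.10, stratify=labels, random_state=0)

    augmenter = ImageDataGenerator(
        rotation_range=5,       # rotation in the range -5 to 5 degrees
        shear_range=0.2,        # shearing, range 0.2
        zoom_range=0.2,         # scaling, range 0.2
        horizontal_flip=True,   # flipping
        vertical_flip=True,
    )
    train_flow = augmenter.flow(train_x, train_y, batch_size=32)

Here augmentation is applied on the fly to training batches; generating and saving augmented copies offline, as the abstract's image counts suggest, is equivalent provided the split precedes augmentation.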
Results. All three deep learning models achieved 100% sensitivity; however, only the ResNet50 model also achieved 100% specificity and accuracy. Inception-v3 achieved 94.8% specificity and 97.2% accuracy, while InceptionResNet-v2 achieved 96.4% and 98.7%, respectively.
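For reference, these figures follow the standard confusion-matrix definitions, here assuming 'Reject' (motion artefact present) is treated as the positive class; a sketch of computing them on the held-out test set (`y_true` and `y_pred` are hypothetical label vectors with 1 = 'Reject'):

    from sklearn.metrics import confusion_matrix

    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    sensitivity = tp / (tp + fn)                # 1.0 for all three models
    specificity = tn / (tn + fp)                # 1.0 for ResNet50 only
    accuracy = (tp + tn) / (tp + tn + fp + fn)

Under this reading, 100% sensitivity means no artefact-affected scan was missed by any model, and the specificity differences reflect 'Accept' scans wrongly flagged as 'Reject'.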
Conclusion. This feasibility study evaluated the ability of three state-of-the-art deep learning models to detect motion artefact in pQCT scans. The deep learning approach appears to be a robust method for automatic and accurate detection of motion artefact in reconstructed pQCT images.