Using machine learning to detect hanging errors in canine thoracic radiographs





In medical diagnostic imaging, “hanging” refers to the orientation in which a radiographic image (X-ray) is presented in the viewing software for interpretation by the radiologist. Industry hanging standards exist to ensure that images of a given body region are always presented to the radiologist in the same way. These standards eliminate the need for radiologists to rotate and flip images, which improves efficiency and accuracy in image interpretation through more effective pattern recognition and reduced distraction. Despite these standards, hanging errors persist in radiology, primarily due to human error in the acquisition process. Machine learning is in its infancy in veterinary medicine and has so far been applied mainly to image interpretation tasks, such as detection of pulmonary patterns (Boissady, Comble, Zhu, Abbott, & Adrien-Maxence, 2021). However, the application of machine learning to veterinary diagnostic image quality control tasks, such as image hanging, has not yet been investigated. This investigation employed multiple image preprocessing steps and both pretrained and untrained architectures, including an SVM using Eigen-Radiographs derived through PCA, EfficientNetB0, AlexNet, and a minimal six-layer self-built neural network, to find learners that could correctly classify hanging errors in canine ventrodorsal thoracic radiographs, with the goal of eventual application in the imaging workflow. Hanging errors, consisting of horizontal flips and rotations (90, 180, and 270 degrees) and representing all errors encountered in clinical practice, were introduced into the images. Radiographic images were acquired and divided into training (800), test (240), and validation (208) sets. The architectures were tested on their ability to identify flip, rotation, and flip and rotation combined. Nearly all models, including the non-neural-network learners, distinguished rotation alone with 80% or greater accuracy.
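The hanging errors described above (horizontal flip combined with 0/90/180/270-degree rotation) can be sketched as follows. This is a minimal illustration of how the eight possible presentations of an image could be generated; the function name and the use of NumPy are assumptions for illustration, not the authors' implementation:

```python
import numpy as np

def make_hanging_variants(image):
    """Generate the eight possible presentations of a radiograph:
    four rotations (0, 90, 180, 270 degrees), each with and without
    a horizontal flip. Returns {(rotation_deg, flipped): array}."""
    variants = {}
    for flipped in (False, True):
        base = np.fliplr(image) if flipped else image
        for k in range(4):
            # np.rot90 rotates counterclockwise k quarter-turns
            variants[(90 * k, flipped)] = np.rot90(base, k)
    return variants
```

Applied to each correctly hung radiograph, a generator like this yields labeled training examples for every error class.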
Flip, and flip combined with rotation, proved more challenging. The best-performing model was a stacked model that categorized rotation, corrected the image, and then categorized flip, achieving an accuracy of 94% on the test set. This investigation serves as an important pilot study of the applicability of machine learning to veterinary diagnostic imaging workflow and quality control, with numerous additional applications warranting future investigation.
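The stacked approach (classify rotation, correct the image, then classify flip) could be sketched as below. The classifiers are stand-ins passed as callables; `correct_rotation`, `stacked_predict`, and the model interfaces are hypothetical names assumed for this sketch, not the thesis code:

```python
import numpy as np

def correct_rotation(image, predicted_deg):
    """Undo a predicted counterclockwise rotation of 0/90/180/270 degrees."""
    return np.rot90(image, k=(-predicted_deg // 90) % 4)

def stacked_predict(image, rotation_model, flip_model):
    """Two-stage prediction: classify rotation first, restore the image
    to upright, then classify horizontal flip on the corrected image."""
    deg = rotation_model(image)            # assumed to return 0, 90, 180, or 270
    upright = correct_rotation(image, deg)
    flipped = flip_model(upright)          # assumed to return True or False
    return deg, flipped
```

Correcting the rotation before the flip stage means the flip classifier only ever sees upright images, which is one plausible reason the stacked design outperformed single-stage classification of the combined error.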



Machine learning, Neural networks, Computer vision, Radiograph, Artificial intelligence, Explainable AI

Graduation Month



Degree

Master of Science


Department of Computer Science

Major Professor

Lior Shamir