Effect of image resolution on automated classification of chest X-rays
Journal Article · Journal of Medical Imaging
- Author Affiliations:
- Univ. of Tennessee, Knoxville, TN (United States). Bredesen Center for Interdisciplinary Research and Graduate Education
- Oak Ridge National Laboratory (ORNL), Oak Ridge, TN (United States)
- VA Connecticut Healthcare, West Haven, CT (United States); Yale Univ., New Haven, CT (United States)
- Univ. of Tennessee, Knoxville, TN (United States). Bredesen Center for Interdisciplinary Research and Graduate Education; Oak Ridge National Laboratory (ORNL), Oak Ridge, TN (United States)
Deep learning (DL) models have received much attention lately for their ability to achieve expert-level performance on the automated analysis of chest X-rays (CXRs). Recently released public CXR datasets include high-resolution images, but state-of-the-art models are trained on reduced-size images because of limits on graphics processing unit memory and training time. As computing hardware continues to advance, it has become feasible to train deep convolutional neural networks on high-resolution images without sacrificing detail through downscaling. This study examines the effect of input resolution on CXR classification performance. We used the publicly available MIMIC-CXR-JPG dataset, comprising 377,110 high-resolution CXR images. We downscaled images from native resolution to 2048 × 2048, 1024 × 1024, 512 × 512, and 256 × 256 pixels and evaluated clinical task performance at these four resolutions using the DenseNet121 and EfficientNet-B4 DL models. We find that while some clinical findings are labeled more reliably at high resolutions, many others are actually labeled better from downscaled inputs. By inspecting effective receptive fields and class activation maps of the trained models, we qualitatively verify that tasks requiring a large receptive field are better suited to downscaled, low-resolution inputs. Lastly, we show that a stacking ensemble across resolutions outperforms each individual learner at every input resolution while providing interpretable scale weights, indicating that diverse information is extracted across resolutions.
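As a concrete illustration of the setup described in the abstract (per-resolution classifiers combined into an ensemble with interpretable scale weights), here is a minimal PyTorch sketch. It is not the authors' code: the class ResolutionEnsemble, the label count NUM_FINDINGS, the torchvision DenseNet121 backbones, and the learned softmax weighting, a simplified stand-in for the paper's stacking meta-learner, are all assumptions made for illustration.

```python
# Illustrative sketch, not the authors' released code. ResolutionEnsemble,
# NUM_FINDINGS, and the learned softmax scale weights (a simplified stand-in
# for the paper's stacking meta-learner) are assumptions for this example.
import torch
import torch.nn as nn
import torchvision.models as models
import torchvision.transforms.functional as TF

RESOLUTIONS = [256, 512, 1024, 2048]  # the four downscaled input sizes studied
NUM_FINDINGS = 14                     # assumed number of CXR finding labels


def make_classifier(num_labels: int) -> nn.Module:
    """DenseNet121 backbone with a multi-label classification head."""
    net = models.densenet121(weights=None)  # pretraining choice is an assumption
    net.classifier = nn.Linear(net.classifier.in_features, num_labels)
    return net


class ResolutionEnsemble(nn.Module):
    """Combines one classifier per input resolution; a softmax over learned
    scale logits yields the interpretable per-resolution weights."""

    def __init__(self, classifiers: dict):
        super().__init__()
        self.resolutions = sorted(classifiers)
        self.classifiers = nn.ModuleList(classifiers[r] for r in self.resolutions)
        self.scale_logits = nn.Parameter(torch.zeros(len(self.resolutions)))

    def forward(self, image: torch.Tensor) -> torch.Tensor:
        # image: (N, 3, H, W) at the highest available resolution
        weights = torch.softmax(self.scale_logits, dim=0)
        logits = torch.zeros(image.shape[0], NUM_FINDINGS)
        for w, res, clf in zip(weights, self.resolutions, self.classifiers):
            x = TF.resize(image, [res, res], antialias=True)  # downscale input
            logits = logits + w * clf(x)
        return logits  # per-finding logits; apply sigmoid for probabilities


ensemble = ResolutionEnsemble({r: make_classifier(NUM_FINDINGS) for r in RESOLUTIONS})
probabilities = torch.sigmoid(ensemble(torch.rand(1, 3, 2048, 2048)))
```

In the setting the abstract describes, each base learner would first be trained at its own resolution; after fitting the combination weights, softmax(scale_logits) can be read off to see how much each resolution contributes to each prediction.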
- Research Organization:
- Oak Ridge National Laboratory (ORNL), Oak Ridge, TN (United States)
- Sponsoring Organization:
- USDOE Office of Science (SC)
- Grant/Contract Number:
- AC05-00OR22725
- OSTI ID:
- 1996738
- Journal Information:
- Journal of Medical Imaging, Vol. 10, Issue 4; ISSN 2329-4302
- Publisher:
- SPIE
- Country of Publication:
- United States
- Language:
- English
Similar Records
Domain Shift Analysis in Chest Radiographs Classification in a Veterans Healthcare Administration Population
Journal Article · April 10, 2025 · Journal of Imaging Informatics in Medicine · OSTI ID: 2573574