Sparse coding of pathology slides compared to transfer learning with deep neural networks
- Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
- Rochester Inst. of Technology, Rochester, NY (United States). Chester F. Carlson Center for Imaging Science
Background: Histopathology images of tumor biopsies present unique challenges for applying machine learning to the diagnosis and treatment of cancer. The pathology slides are high resolution, often exceeding 1 GB, have non-uniform dimensions, and often contain multiple tissue slices of varying sizes surrounded by large empty regions. The locations of abnormal or cancerous cells, which may constitute a small portion of any given tissue sample, are not annotated. Cancer image datasets are also extremely imbalanced, with most slides being associated with relatively common cancers. Since deep representations trained on natural photographs are unlikely to be optimal for classifying pathology slide images, which have different spectral ranges and spatial structure, here we describe an approach for learning features and inferring representations of cancer pathology slides based on sparse coding.
Results: We show that conventional transfer learning using a state-of-the-art deep learning architecture pre-trained on ImageNet (ResNet) and fine-tuned for a binary tumor/no-tumor classification task achieved between 85% and 86% accuracy. However, when all layers up to the last convolutional layer in ResNet are replaced with a single feature map inferred via sparse coding, using a dictionary optimized for sparse reconstruction of unlabeled pathology slides, classification performance improves to over 93%, corresponding to a 54% error reduction.
Conclusions: We conclude that a feature dictionary optimized for biomedical imagery may in general support better classification performance than does conventional transfer learning using a dictionary pre-trained on natural images.
- Research Organization:
- Los Alamos National Laboratory (LANL), Los Alamos, NM (United States)
- Sponsoring Organization:
- USDOE Office of Science (SC)
- Grant/Contract Number:
- AC52-06NA25396
- OSTI ID:
- 1626772
- Journal Information:
- BMC Bioinformatics, Vol. 19, Issue S18; ISSN 1471-2105
- Publisher:
- BioMed Central
- Country of Publication:
- United States
- Language:
- English
- Extracting Crop Spatial Distribution from Gaofen 2 Imagery Using a Convolutional Neural Network (journal, July 2019)
- A New CNN-Bayesian Model for Extracting Improved Winter Wheat Spatial Distribution from GF-2 imagery (journal, March 2019)
Similar Records
- Deformable segmentation of 3D MR prostate images via distributed discriminative dictionary and ensemble learning
- Automatic extraction of cancer registry reportable information from free-text pathology reports using multitask convolutional neural networks