U.S. Department of Energy
Office of Scientific and Technical Information

AN INDUCTIVE MAPPING WITH CONVOLUTIONAL REPRESENTATIONS FOR HUMAN SETTLEMENT DETECTION: PRELIMINARY RESULTS

Conference ·
OSTI ID:1408008
Undoubtedly, deep convolutional learning methods continue to improve performance in image-level classification for computer vision and remote sensing applications. However, the spatio-temporal nature of remote sensing imagery offers other interesting challenges in multiscale image understanding. Emerging opportunities include seeking fine-grained and neighborhood mapping with overhead imagery. Limitations due to a lack of relevant-scale ground truth often mandate that these challenges be pursued as disjoint. We test this premise and explore representation spaces from a single deep convolutional network and their visualization to argue for a novel unified feature extraction framework. The objective is to utilize and re-purpose trained feature extractors, without the need for network retraining, on three remote sensing tasks, i.e. superpixel mapping, pixel-level segmentation, and semantics-based image visualization. By leveraging the same convolutional feature extractors and viewing them as visual information extractors that encode different image representation spaces, we demonstrate a preliminary inductive transfer learning potential on multiscale experiments that incorporate edge-level details up to semantic-level information.
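The core idea in the abstract — a single stack of trained convolutional layers yields several reusable representation spaces, from edge-level detail in shallow layers to more abstract structure in deeper ones — can be illustrated with a minimal NumPy sketch. This is not the paper's actual network: the kernel sizes, channel counts, and random weights here are purely illustrative stand-ins for a trained feature extractor.

```python
import numpy as np

def conv_relu(x, kernels):
    """Naive 'valid' 2D convolution followed by ReLU.

    x: input feature map of shape (H, W, C_in)
    kernels: filter bank of shape (k, k, C_in, C_out)
    """
    k = kernels.shape[0]
    H, W, _ = x.shape
    out = np.zeros((H - k + 1, W - k + 1, kernels.shape[-1]))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = x[i:i + k, j:j + k, :]
            # Correlate the patch with every filter at once
            out[i, j] = np.tensordot(patch, kernels, axes=([0, 1, 2], [0, 1, 2]))
    return np.maximum(out, 0.0)  # ReLU non-linearity

rng = np.random.default_rng(0)
image = rng.random((32, 32, 3))                    # toy RGB tile
k1 = rng.standard_normal((3, 3, 3, 8)) * 0.1       # stand-in for trained layer-1 filters
k2 = rng.standard_normal((3, 3, 8, 16)) * 0.1      # stand-in for trained layer-2 filters

# One forward pass exposes two distinct representation spaces:
shallow = conv_relu(image, k1)   # edge-level representation (earlier layer)
deep = conv_relu(shallow, k2)    # more abstract representation (deeper layer)
print(shallow.shape, deep.shape)  # (30, 30, 8) (28, 28, 16)
```

In the framework described above, such intermediate activations would be taken from an already-trained network and re-purposed — e.g. shallow maps for superpixel/edge-aware tasks and deep maps for semantic visualization — with no retraining of the extractor itself.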
Research Organization:
Oak Ridge National Laboratory (ORNL), Oak Ridge, TN (United States)
Sponsoring Organization:
USDOE
DOE Contract Number:
AC05-00OR22725
OSTI ID:
1408008
Country of Publication:
United States
Language:
English

Similar Records

Optimal Segmentation Strategy for Compact Representation of Hyperspectral Image Cubes
Conference · February 7, 2000 · OSTI ID:15008098

Machine-Learning-based Algorithms for Automated Image Segmentation Techniques of Transmission X-ray Microscopy (TXM)
Journal Article · May 10, 2021 · JOM. Journal of the Minerals, Metals & Materials Society · OSTI ID:1834598

Deep Neural Network Algorithm for CMC Microstructure Characterization and Variability Quantification
Conference · September 19, 2022 · OSTI ID:2394691
