OSTI.GOV, U.S. Department of Energy
Office of Scientific and Technical Information

Title: Bistatic SAR: Imagery & Image Products.

Abstract

While typical SAR imaging employs a co-located (monostatic) RADAR transmitter and receiver, bistatic SAR imaging separates the transmitter and receiver locations. The transmitter and receiver geometry determines whether the scattered signal is backscatter, forward scatter, or side scatter; a monostatic SAR image is formed from backscatter. Therefore, depending on the transmitter/receiver collection geometry, the captured imagery may be quite different from that sensed by a monostatic SAR. This document presents imagery and image products formed from signals captured during the validation stage of the bistatic SAR research. Image quality and image characteristics are discussed first. Then image products such as two-color multi-view (2CMV) and coherent change detection (CCD) are presented.
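
The report itself presents the 2CMV and CCD products; as a rough orientation to what those products compute, the following is a minimal sketch of the standard sample-coherence (CCD) and two-color composite (2CMV) formulas applied to two co-registered, complex-valued SAR images. The array names f and g, the window size win, and the max-based normalization are illustrative assumptions, not the report's actual processing.

```python
# Hedged sketch: standard CCD coherence and a simple 2CMV-style composite.
# Assumes f, g are co-registered complex SAR images (numpy 2-D arrays).
import numpy as np
from scipy.ndimage import uniform_filter

def ccd_coherence(f: np.ndarray, g: np.ndarray, win: int = 5) -> np.ndarray:
    """Sample coherence magnitude |gamma| over a sliding win x win window."""
    fg = f * np.conj(g)
    # ndimage filters are real-valued, so filter real and imaginary parts separately
    num = uniform_filter(fg.real, win) + 1j * uniform_filter(fg.imag, win)
    den = np.sqrt(uniform_filter(np.abs(f) ** 2, win) *
                  uniform_filter(np.abs(g) ** 2, win))
    return np.abs(num) / np.maximum(den, 1e-12)

def two_color_multiview(f: np.ndarray, g: np.ndarray) -> np.ndarray:
    """2CMV-style RGB composite: pass 1 drives red, pass 2 drives cyan,
    so unchanged scatterers render near-gray and changes show as color."""
    a = np.abs(f) / (np.abs(f).max() + 1e-12)
    b = np.abs(g) / (np.abs(g).max() + 1e-12)
    return np.dstack([a, b, b])
```

In practice the magnitudes would typically be displayed in dB with clipping, but the simple normalization above keeps the sketch short.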

Authors:
Yocky, David A.; Wahl, Daniel E.; Jakowatz, Charles V.
Publication Date:
2014
Research Org.:
Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)
Sponsoring Org.:
USDOE National Nuclear Security Administration (NNSA), Office of Defense Nuclear Nonproliferation (NA-20)
OSTI Identifier:
1159447
Report Number(s):
SAND2014-18346
537911
DOE Contract Number:
AC04-94AL85000
Resource Type:
Technical Report
Country of Publication:
United States
Language:
English

Citation Formats

Yocky, David A., Wahl, Daniel E., and Jakowatz, Charles V. Bistatic SAR: Imagery & Image Products. United States: N. p., 2014. Web. doi:10.2172/1159447.
Yocky, David A., Wahl, Daniel E., & Jakowatz, Charles V. Bistatic SAR: Imagery & Image Products. United States. doi:10.2172/1159447.
Yocky, David A., Wahl, Daniel E., and Jakowatz, Charles V. 2014. "Bistatic SAR: Imagery & Image Products." United States. doi:10.2172/1159447. https://www.osti.gov/servlets/purl/1159447.
@article{osti_1159447,
title = {Bistatic SAR: Imagery & Image Products.},
author = {Yocky, David A. and Wahl, Daniel E. and Jakowatz, Charles V.},
abstractNote = {While typical SAR imaging employs a co-located (monostatic) RADAR transmitter and receiver, bistatic SAR imaging separates the transmitter and receiver locations. The transmitter and receiver geometry determines whether the scattered signal is backscatter, forward scatter, or side scatter; a monostatic SAR image is formed from backscatter. Therefore, depending on the transmitter/receiver collection geometry, the captured imagery may be quite different from that sensed by a monostatic SAR. This document presents imagery and image products formed from signals captured during the validation stage of the bistatic SAR research. Image quality and image characteristics are discussed first. Then image products such as two-color multi-view (2CMV) and coherent change detection (CCD) are presented.},
doi = {10.2172/1159447},
url = {https://www.osti.gov/servlets/purl/1159447},
place = {United States},
year = {2014}
}

Technical Report: https://www.osti.gov/servlets/purl/1159447

Similar Records:
  • This report describes the significant processing steps that were used to take the raw recorded digitized signals from the bistatic synthetic aperture RADAR (SAR) hardware built for the NCNS Bistatic SAR project to a final bistatic SAR image. In general, the process steps herein are applicable to bistatic SAR signals that include the direct-path signal and the reflected signal. The steps include preprocessing, data extraction to form a phase history, and finally, image formation (a minimal image-formation sketch appears after this list). Various plots and values are shown at most steps to illustrate the processing for a bistatic COSMO-SkyMed collection gathered on June 10, 2013 on Kirtland Air Force Base, New Mexico.
  • This is the second in a series of PNNL Multispectral Imagery (ST474D) reports on automated co-registration and rectification of multisensor imagery. In the first report, a semi-automated registration procedure was introduced based on methods proposed by Chen and Lee, which emphasized registration of same-sensor imagery. The Chen and Lee approach is outlined in Figure 1 and described in detail in the first report. PNNL made several enhancements to the Chen and Lee approach; these modifications are outlined in Figure 2 and are also described in detail in the first report. The PNNL enhancements to the Chen and Lee approach introduced in the first phase have been named Multisensor Image Registration Automation (MIRA). These improvements increased computational efficiency and offered additional algorithms for coarse matching of disparate image types. In the MIRA approach, one set of optimum GCP locations is determined based on a Delaunay triangulation technique using an initial set of GCPs provided by the user, rather than repeating this step for each added control point as is proposed by Chen and Lee. The Chen and Lee approach uses an adjacent pixel difference algorithm for coarse matching patches of the reference image with the source image, while the MIRA approach adds other algorithms. The MIRA approach also checks whether a newly determined GCP fits the existing warping equation.
  • The document consists of three parts: 1) This document presents the data formats for the modeling system developed at the UC Davis Computer Graphics Laboratory. These files convey modeling information to the research rendering systems currently being developed at the laboratory. All files contain keyword and numerical data information to describe polygons, patches, spheres, light parameters, textures, and camera descriptions. The files are in ASCII format for ease of transfer between machines and so that they can be edited in order to "tweak" the data. 2) A dissection is given for the Fujimoto algorithm. In 1985, Akira Fujimoto first published his algorithm for speeding up the ray-tracing operation. This algorithm works on a unique principle that trades off the substantial ray/surface intersection calculations with a search, along the ray, through a set of cuboid cells. The principal speedup in the algorithm is twofold: a routine based upon integer arithmetic is used to quickly identify the cells that lie in the path of the ray, and only those objects that intersect the chosen cells must be tested for intersection against the ray. In this paper, we present a thorough analysis of Fujimoto's algorithm, specifically concentrating on the integer calculations. 3) A description is given for modeling and image generation techniques for high-resolution 3-dimensional imagery.
  • The project is entitled Fusion of Information from Optical, Thermal, Imagery and Geologic/Topographic Products to Detect Underground Detonations. The study established the feasibility of using such data to support the detection and monitoring of underground tests. The second phase of the study brings the analyses into the real world by testing the feasibility against a real underground test and recommends the suite of sensors to be used and the tools to exploit them. The current task involves the selection of an ongoing underground nuclear test, the scheduling of overhead imagery, and the analysis of both the collected imagery and collateral data. A significant portion of this task is the compilation of a Ground Truth document that provides an historical background of the test and the changes that occurred. This report provides data that can and will be used to support the development of special digital tools that may be employed with multispectral data collected by various civil sensors. Documentation of ground truth before and after the test detonation provides data on the visible changes that have occurred, which should become the focal point for developing analytical tools to record the existence of an underground test.
  • This report focuses on the use of all-source overhead remote sensor imagery for monitoring underground nuclear tests and related activities. This documentation includes: (1) the main unclassified body of the report; (2) a separate ground truth Annex; and (3) a separate classified Annex. Autometric's approach was to investigate the exploitation potential of the various sensors, especially the fusion of products from them in combination with each other and other available collateral data. This approach featured empirical analyses of multisensor/multispectral imagery and collateral data collected before, during, and after an actual underground nuclear test (named BEXAR). Advanced softcopy digital image processing and hardcopy image interpretation techniques were investigated for the research. These included multispectral (Landsat, SPOT), hyperspectral, and subpixel analyses; stereoscopic and monoscopic information extraction; multisensor fusion processes; end-to-end exploitation workstation concept development; and innovative change detection methodologies. Conclusions and recommendations for further R&D and operational uses were provided in: (1) the general areas of sensor capabilities, database management, collection management, and data processing, exploitation, and fusion; and (2) specific multispectral, hyperspectral, subpixel, three-dimensional modeling, and unique unconventional imaging sensor technology areas.
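
The first related report above walks a bistatic phase history through preprocessing, extraction, and image formation. As a rough illustration of that final image-formation step only, the following is a minimal time-domain backprojection sketch; it is not the report's actual processing chain, and the names pulses, t_fast, tx_pos, rx_pos, grid, and fc are illustrative assumptions about how range-compressed data and collection geometry might be passed in.

```python
# Hedged sketch: bistatic time-domain backprojection of range-compressed pulses.
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def backproject(pulses, t_fast, tx_pos, rx_pos, grid, fc):
    """pulses:  (n_pulses, n_samples) range-compressed complex data
       t_fast:  (n_samples,) fast-time axis in seconds (increasing)
       tx_pos:  (n_pulses, 3) transmitter position per pulse, meters
       rx_pos:  (n_pulses, 3) receiver position per pulse, meters
       grid:    (n_pix, 3) image pixel positions, meters
       fc:      center frequency in Hz."""
    img = np.zeros(grid.shape[0], dtype=complex)
    for p in range(pulses.shape[0]):
        # Bistatic delay = (transmitter-to-pixel + pixel-to-receiver) / c
        d_tx = np.linalg.norm(grid - tx_pos[p], axis=1)
        d_rx = np.linalg.norm(grid - rx_pos[p], axis=1)
        tau = (d_tx + d_rx) / C
        # Sample each pulse at the pixel's delay (interpolate real/imag parts)
        samp = (np.interp(tau, t_fast, pulses[p].real)
                + 1j * np.interp(tau, t_fast, pulses[p].imag))
        # Remove the carrier phase accrued over the bistatic path, then accumulate
        img += samp * np.exp(1j * 2.0 * np.pi * fc * tau)
    return img
```

The backprojected result would then be reshaped to the image grid and displayed in magnitude (typically in dB); interpolation, motion compensation, and autofocus details are omitted here for brevity.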