DOE PAGES
U.S. Department of Energy
Office of Scientific and Technical Information

Title: Data fusion for a vision-aided radiological detection system: Calibration algorithm performance

Abstract

In order to improve the ability to detect, locate, track and identify nuclear/radiological threats, the University of Florida nuclear detection community has teamed up with the 3D vision community to collaborate on a low-cost data fusion system. The key is to develop an algorithm that fuses the data from multiple radiological and 3D vision sensors into one system. The system under development at the University of Florida is being assessed with various types of radiological detectors and widely available vision sensors. A series of experiments was devised utilizing two EJ-309 liquid organic scintillation detectors (one primary and one secondary), a Microsoft Kinect for Windows v2 sensor, and a Velodyne HDL-32E high-definition LiDAR sensor, a highly sensitive vision sensor primarily used to generate data for self-driving cars. Each experiment consisted of 27 static measurements of a Cf-252 source arranged in a cubic grid with three different distances in each dimension. The calibration algorithm developed is used to calibrate the relative 3D location of the two different types of sensors without the need to measure it by hand, thus preventing operator manipulation and human error. The algorithm can also account for the facility-dependent deviation from ideal data fusion correlation. Determining a detector's location with the vision sensor alone would likewise limit the possible locations, but it would not allow the room dependence (facility-dependent deviation) to be folded into the detector pseudo-location used for later data analysis. Using manually measured source location data, the algorithm predicted the offset detector location to within an average calibration-difference of 20 cm of its actual location, where calibration-difference is the Euclidean distance from the algorithm-predicted detector location to the measured detector location. The Kinect vision sensor data produced an average calibration-difference of 35 cm, and the HDL-32E an average of 22 cm. Using NaI and He-3 detectors in place of the EJ-309, the calibration-difference was 52 cm for NaI and 75 cm for He-3. The algorithm itself is not detector dependent; however, these results indicate that detector-dependent adjustments are required.
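The abstract does not spell out the fitting procedure, but the core idea it describes, recovering a detector's position from count rates measured at 27 known source positions and then scoring the result by Euclidean distance, can be sketched. The following Python sketch is a minimal illustration only: it assumes an inverse-square count-rate model and a least-squares fit, and the grid spacing, source strength, noise level, and all variable names are assumptions, not taken from the paper.

import numpy as np
from scipy.optimize import least_squares

# Hypothetical 3 x 3 x 3 grid of source positions (27 static measurements),
# mirroring the cubic geometry in the abstract; spacing is assumed.
grid = np.array([[x, y, z]
                 for x in (0.0, 1.0, 2.0)
                 for y in (0.0, 1.0, 2.0)
                 for z in (0.0, 1.0, 2.0)])

def predicted_rates(params, positions):
    # Assumed inverse-square model: rate = A / |r_source - r_detector|^2.
    det, amplitude = params[:3], params[3]
    d2 = np.sum((positions - det) ** 2, axis=1)
    return amplitude / d2

def calibrate(positions, measured_rates, guess=(0.5, 0.5, 0.5, 100.0)):
    # Least-squares fit of detector position (and source strength) to the
    # count rates measured at the known source positions.
    residual = lambda p: predicted_rates(p, positions) - measured_rates
    return least_squares(residual, x0=np.asarray(guess)).x[:3]

def calibration_difference(predicted_det, measured_det):
    # Euclidean distance between the algorithm-predicted and the measured
    # detector location, as defined in the abstract.
    return np.linalg.norm(np.asarray(predicted_det) - np.asarray(measured_det))

# Synthetic check: simulate noisy rates for a detector at a known location,
# then recover that location from the rates alone.
true_det = np.array([1.2, 0.4, 0.9])
rates = predicted_rates(np.append(true_det, 100.0), grid)
rates = rates + np.random.default_rng(0).normal(scale=0.5, size=rates.shape)
estimate = calibrate(grid, rates)
print("estimated detector location:", estimate)
print("calibration-difference:", calibration_difference(estimate, true_det))

On this synthetic data the recovered location lands close to the true one; the paper's reported 20-75 cm calibration-differences reflect real detectors, real vision sensors, and facility-dependent effects that this idealized sketch does not model.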

Authors:
Stadnikia, Kelsey [1]; Henderson, Kristofer [1]; Martin, Allan [1]; Riley, Phillip [1]; Koppal, Sanjeev [1]; Enqvist, Andreas [1]
  1. Univ. of Florida, Gainesville, FL (United States)
Publication Date:
February 12, 2018
Research Org.:
Univ. of Michigan, Ann Arbor, MI (United States)
Sponsoring Org.:
USDOE NA Office of Nonproliferation and Verification Research and Development (NA-22); US Department of Homeland Security (DHS); USDOE National Nuclear Security Administration (NNSA), Office of Defense Nuclear Nonproliferation
OSTI Identifier:
1454830
Alternate Identifier(s):
OSTI ID: 1487096
Grant/Contract Number:  
NA0002534; 2014-DN-077-ARI083-01
Resource Type:
Accepted Manuscript
Journal Name:
Nuclear Instruments and Methods in Physics Research. Section A, Accelerators, Spectrometers, Detectors and Associated Equipment
Additional Journal Information:
Journal Volume: 890; Journal Issue: C; Journal ID: ISSN 0168-9002
Publisher:
Elsevier
Country of Publication:
United States
Language:
English
Subject:
46 INSTRUMENTATION RELATED TO NUCLEAR SCIENCE AND TECHNOLOGY; 97 MATHEMATICS AND COMPUTING; Data fusion; Source tracking; Liquid scintillator; Algorithms

Citation Formats

Stadnikia, Kelsey, Henderson, Kristofer, Martin, Allan, Riley, Phillip, Koppal, Sanjeev, and Enqvist, Andreas. Data fusion for a vision-aided radiological detection system: Calibration algorithm performance. United States: N. p., 2018. Web. doi:10.1016/j.nima.2018.01.102.
Stadnikia, Kelsey, Henderson, Kristofer, Martin, Allan, Riley, Phillip, Koppal, Sanjeev, & Enqvist, Andreas. Data fusion for a vision-aided radiological detection system: Calibration algorithm performance. United States. https://doi.org/10.1016/j.nima.2018.01.102
Stadnikia, Kelsey, Henderson, Kristofer, Martin, Allan, Riley, Phillip, Koppal, Sanjeev, and Enqvist, Andreas. Mon, Feb 12, 2018. "Data fusion for a vision-aided radiological detection system: Calibration algorithm performance". United States. https://doi.org/10.1016/j.nima.2018.01.102. https://www.osti.gov/servlets/purl/1454830.
@article{osti_1454830,
title = {Data fusion for a vision-aided radiological detection system: Calibration algorithm performance},
author = {Stadnikia, Kelsey and Henderson, Kristofer and Martin, Allan and Riley, Phillip and Koppal, Sanjeev and Enqvist, Andreas},
doi = {10.1016/j.nima.2018.01.102},
journal = {Nuclear Instruments and Methods in Physics Research. Section A, Accelerators, Spectrometers, Detectors and Associated Equipment},
number = {C},
volume = {890},
place = {United States},
year = {2018},
month = {2}
}

Journal Article:
Free Publicly Available Full Text
Publisher's Version of Record

Citation Metrics:
Cited by: 4 works
Citation information provided by
Web of Science

Figures / Tables:

Figure 1: Tracking scenarios and the principle of the overlapping distance/time domain between the radiological and visual sensors. The scales will be arbitrary, but the data trends will be highly correlated.
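As a rough illustration of the correlation principle the caption describes, the short Python sketch below generates a hypothetical vision-tracked distance series, derives a noisy count rate from it under an assumed inverse-square response, and correlates the two. Every value and name here is invented for illustration; nothing is drawn from the paper's data.

import numpy as np

# Hypothetical time series: a vision-tracked source-to-detector distance and
# the corresponding count rate under an assumed inverse-square response.
t = np.linspace(0.0, 10.0, 200)
distance = 2.0 + np.sin(0.5 * t)    # tracked distance (m), assumed motion
count_rate = 50.0 / distance**2     # idealized detector response
count_rate = count_rate + np.random.default_rng(1).normal(scale=0.3, size=t.shape)

# The two series live on different scales, but once the distance is mapped
# through the inverse-square law their trends correlate strongly.
r = np.corrcoef(count_rate, 1.0 / distance**2)[0, 1]
print(f"Pearson correlation: {r:.3f}")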
