OSTI.GOV — U.S. Department of Energy
Office of Scientific and Technical Information

Title: R and DE Robotic Sensor Intern Team: Advanced Sensor Suite Prototype

Abstract

The Department of Energy (DOE) nuclear complex includes many hazardous and challenging environments that must be inspected remotely and routinely for structural integrity evaluation and long-term planning purposes. The H-Canyon Air Exhaust (HCAEX) Tunnel is an example of such an environment, where a biannual inspection is currently performed. Presently, inspections are performed by teleoperated robotic crawlers that provide low-resolution video feedback to the structural engineers. Advanced sensors, including Lidar and panospheric high-resolution cameras, have the potential to provide additional valuable data in future inspections. The primary goal of the sensor suite is to provide an internal 3D map of the HCAEX tunnel. The candidate sensors were a stereo camera, a 3D Lidar, and a 2D Lidar that would need to be mechanically rotated. The issues with the stereo camera were that it does not have a large field of view (FOV) and that its depth information was not nearly as accurate as even the baseline Lidars. The Hokuyo UTM-30LX-EW provides a 270 deg. FOV with high accuracy and resolution, while the largest FOV on a 3D Lidar considered was 40 deg. This is why a 2D Lidar was chosen over a 3D Lidar: even though the 2D Lidar requires additional components and software to generate a 3D map, its FOV advantage is too great to ignore given that the HCAEX tunnel is a relatively confined space. Along with the Lidar, a camera sensor, an IMU, a pan-tilt unit, and an on-board computer needed to be identified. The two panospheric cameras together allow a full 360 deg. FOV. The IMU provides data on the state of the robot inside the tunnel. The computer gathers and processes data on board and controls the pan-tilt system. Mounts were designed and 3D printed to build the prototype of the sensor suite. It was designed to have the Lidar on top with its full FOV free, while placing the cameras below and as close together as possible while still leaving most of their FOV unobstructed.
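As a rough illustration of how a panned 2D Lidar can still produce a 3D map, the sketch below converts one 2D scan "slice" plus the current pan angle into 3D points. The function name, angle conventions, and frames here are illustrative assumptions, not taken from the project's actual software:

```python
import math

def scan_to_points_3d(ranges, start_angle, angle_step, pan_angle):
    """Convert one 2D Lidar 'slice' into 3D points.

    The 2D scanner sweeps a fan of beams in its own scan plane; the
    pan unit rotates that plane about the vertical (z) axis.  Angles
    are in radians, ranges in meters.  Frame conventions here are a
    simplifying assumption for illustration only.
    """
    points = []
    cos_p, sin_p = math.cos(pan_angle), math.sin(pan_angle)
    for i, r in enumerate(ranges):
        a = start_angle + i * angle_step
        # Beam endpoint within the scanner's own plane
        # (x forward along the beam's horizontal component, z up).
        x_s, z_s = r * math.cos(a), r * math.sin(a)
        # Rotate the scan plane about the vertical axis by the pan angle.
        points.append((x_s * cos_p, x_s * sin_p, z_s))
    return points
```

Sweeping `pan_angle` through a full rotation while collecting slices would accumulate the points of a complete 3D scan.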
The design also managed the cables such that they were unlikely to be damaged if placed in tension and did not intrude much on the cameras' vision. The sensor tree developed for the H-Canyon tunnel inspection project produces a large quantity of camera and Lidar data, so a postprocessing pipeline was created to facilitate the analysis of these data. The pipeline follows this routine: 1) Registration - salient local features in subsequent scans are identified and used to align the scans with one another, allowing production of a single cohesive map of the tunnel. 2) Sensor Fusion - RGB camera data is 'painted' onto the depth data created by the Lidar. 3) Segmentation - individual surfaces within the scene are segmented out separately for individual analysis. 4) Degradation Analysis - automatic mapping of local deviation from the plane in position or surface normal is performed, as is detection of exposed rebar using Lidar return intensity. 5) Obstacle Detection - Lidar data is also used in real time to detect obstacles in the environment, giving the operator the feedback needed to drive successfully in the tunnel. 6) Graphical User Interface - a custom GUI was created to facilitate customer use of the system. The system provides real-time feedback from both the cameras and the Lidar. The Lidar point-cloud data is also used for obstacle detection: as the Lidar spins, algorithms analyze the current 'slice', and obstacles in the environment are determined and registered on a cost map. The GUI allows the user to easily command the crawler, set the rotational speed of the Lidar, perform a scan, and register and save the generated 3D cloud after a scan is done. To protect the crawler, a critical aspect of the GUI design is allowing the user to cancel a scan at any time after it is commanded and before it is saved.
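The real-time obstacle-detection step could, under simple assumptions, look like the following sketch: each incoming Lidar slice is thresholded on height, and hits are accumulated into a cost map keyed by grid cell. The height test, grid layout, and function name are illustrative stand-ins; the abstract does not specify the actual obstacle criteria used:

```python
def update_cost_map(cost_map, slice_points, cell_size, height_threshold):
    """Mark cost-map cells hit by points above a height threshold.

    cost_map is a dict keyed by (ix, iy) grid cell; slice_points are
    (x, y, z) tuples in the robot frame, with cell_size and z in
    meters.  Thresholding on z is one simple stand-in for whatever
    obstacle tests the real system runs on each slice.
    """
    for x, y, z in slice_points:
        if z > height_threshold:
            cell = (int(x // cell_size), int(y // cell_size))
            # Count hits per cell; repeated hits raise the cell's cost.
            cost_map[cell] = cost_map.get(cell, 0) + 1
    return cost_map
```

Cells whose counts exceed some confidence threshold would then be flagged to the operator as obstacles.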
The sensor tree's depth accuracy was evaluated against a high-end commercial FARO Terrestrial Laser Scanner (TLS) specified for millimeter accuracy at distances above 10 m. For the purposes of these tests, the depth measurements produced by the TLS were taken as ground truth. Example test wall panels were scanned with the TLS and postprocessed with its proprietary software, and subsequently scanned with the sensor tree system. The two scans were manually aligned, and for each point in the output cloud from the sensor tree, the distance to the nearest ground-truth point in the FARO cloud was determined. 98.7% of all produced points were within the specified target range of 6 mm, with an average deviation of 1.75 mm. The test walls used have an overall in-wall depth variation of around 40 mm.
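The nearest-neighbor comparison described above can be sketched as follows. The brute-force search and function names are illustrative assumptions (a production pipeline would use a k-d tree for speed, but the metric is identical):

```python
import math

def cloud_deviation(test_points, truth_points):
    """For each test point, distance to its nearest ground-truth point.

    Brute-force O(n*m) nearest-neighbor search over (x, y, z) tuples;
    returns the list of per-point deviations in the clouds' units.
    """
    def dist(p, q):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))
    return [min(dist(p, q) for q in truth_points) for p in test_points]

def summarize(deviations, tolerance):
    """Fraction of points within tolerance, and the mean deviation."""
    within = sum(d <= tolerance for d in deviations) / len(deviations)
    mean = sum(deviations) / len(deviations)
    return within, mean
```

Applied to the aligned sensor-tree and FARO clouds with a 6 mm tolerance, `summarize` would yield the reported 98.7% / 1.75 mm figures.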

Authors:
Suarez, Christopher; McMahon, Conor; Losada, Manuel; Benitez, Julian; Wells, William; Plummer, Jean [1]
  1. Savannah River National Laboratory - SRNL (United States)
Publication Date:
Research Org.:
WM Symposia, Inc., PO Box 27646, 85285-7646 Tempe, AZ (United States)
OSTI Identifier:
23005525
Report Number(s):
INIS-US-21-WM-P46
TRN: US21V1505045859
Resource Type:
Conference
Resource Relation:
Conference: WM2019: 45th Annual Waste Management Conference, Phoenix, AZ (United States), 3-7 Mar 2019; Other Information: Country of input: France; available online at: https://www.xcdsystem.com/wmsym/2019/index.html
Country of Publication:
United States
Language:
English
Subject:
12 MANAGEMENT OF RADIOACTIVE WASTES, AND NON-RADIOACTIVE WASTES FROM NUCLEAR FACILITIES; 42 ENGINEERING; 3D PRINTING; COMPUTER CODES; COMPUTERIZED SIMULATION; GRAPHICAL USER INTERFACE; GROUND TRUTH MEASUREMENTS; LASERS; MAPPING; OPTICAL RADAR; REMOTE SENSING; ROBOTS; SENSORS; SPECIFICATIONS; TUNNELS

Citation Formats

Suarez, Christopher, McMahon, Conor, Losada, Manuel, Benitez, Julian, Wells, William, and Plummer, Jean. R and DE Robotic Sensor Intern Team: Advanced Sensor Suite Prototype. United States: N. p., 2019. Web.
Suarez, Christopher, McMahon, Conor, Losada, Manuel, Benitez, Julian, Wells, William, & Plummer, Jean. R and DE Robotic Sensor Intern Team: Advanced Sensor Suite Prototype. United States.
Suarez, Christopher, McMahon, Conor, Losada, Manuel, Benitez, Julian, Wells, William, and Plummer, Jean. 2019. "R and DE Robotic Sensor Intern Team: Advanced Sensor Suite Prototype". United States.
@article{osti_23005525,
title = {R and DE Robotic Sensor Intern Team: Advanced Sensor Suite Prototype},
author = {Suarez, Christopher and McMahon, Conor and Losada, Manuel and Benitez, Julian and Wells, William and Plummer, Jean},
url = {https://www.osti.gov/biblio/23005525},
place = {United States},
year = {2019},
month = {7}
}
