
Search for: All records

Creators/Authors contains: "Carlson, Jeffrey J."
  1. We consider the class of integrated network design and scheduling (INDS) problems. These problems focus on selecting and scheduling operations that will change the characteristics of a network, while being specifically concerned with the performance of the network over time. Motivating applications of INDS problems include infrastructure restoration after extreme events and building humanitarian distribution supply chains. While similar models have been proposed, no one has performed an extensive review of INDS problems covering their complexity, network and scheduling characteristics, information, and solution methods. We examine INDS problems under a parallel identical machine scheduling environment where the performance of the network is evaluated by solving classic network optimization problems. We classify all considered INDS problems as NP-Hard and propose a novel heuristic dispatching rule algorithm that selects and schedules sets of arcs based on their interactions in the network. We present computational analysis based on realistic data sets representing the infrastructures of coastal New Hanover County, North Carolina; lower Manhattan, New York; and a realistic artificial community, CLARC County. These tests demonstrate the importance of a dispatching rule for arriving at near-optimal solutions during real-time decision-making activities. We extend INDS problems to incorporate release dates, which represent the earliest an operation can be performed, and flexible release dates through the introduction of specialized machine(s) that can perform work to move a release date earlier in time. An online optimization setting is explored where the release date of a component is not known.
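The dispatching-rule idea in item 1 can be sketched as a greedy list-scheduling heuristic on parallel identical machines. This is a minimal illustration, not the report's algorithm: the per-arc `benefit` scores, the benefit-per-duration priority, and all names are assumptions standing in for the paper's arc-interaction-based selection.

```python
import heapq

def dispatch(operations, num_machines):
    """Greedy dispatching rule (illustrative sketch).

    operations: list of (name, duration, benefit) tuples for arc-restoration
    work; num_machines: count of parallel identical machines (work crews).
    Returns a schedule of (name, machine, start, finish) tuples.
    """
    # Rank operations by an assumed benefit-per-unit-duration priority.
    ranked = sorted(operations, key=lambda op: op[2] / op[1], reverse=True)
    # Min-heap of (time the machine next becomes free, machine id).
    machines = [(0.0, m) for m in range(num_machines)]
    heapq.heapify(machines)
    schedule = []
    for name, duration, benefit in ranked:
        # Dispatch the highest-priority remaining operation to the
        # earliest-free machine.
        free_at, mid = heapq.heappop(machines)
        schedule.append((name, mid, free_at, free_at + duration))
        heapq.heappush(machines, (free_at + duration, mid))
    return schedule
```

A real INDS heuristic would re-score the remaining arcs after each assignment, since restoring one arc changes the marginal network value of its neighbors; a static priority is only the simplest baseline.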
  2. This report addresses the development of automated video-screening technology to assist security forces in protecting our homeland against terrorist threats. A prevailing threat is the covert placement of bombs inside crowded public facilities. Although video-surveillance systems are increasingly common, current systems cannot detect the placement of bombs. It is also unlikely that security personnel could detect a bomb or its placement by observing video from surveillance cameras. The problems lie in the large number of cameras required to monitor large areas, the limited number of security personnel employed to protect these areas, and the intense diligence required to effectively screen live video from even a single camera. Different from existing video-detection systems designed to operate in nearly static environments, we are developing technology to detect changes in the background of dynamic environments: environments where motion and human activities are persistent over long periods. Our goal is to quickly detect background changes, even if the background is visible to the camera less than 5 percent of the time and possibly never free from foreground activity. Our approach employs statistical scene models based on mixture densities. We hypothesized that the background component of the mixture has a small variance compared to foreground components. Experiments demonstrate this hypothesis is true under a wide variety of operating conditions. A major focus involved the development of robust background estimation techniques that exploit this property. We desire estimation algorithms that can rapidly produce accurate background estimates and detection algorithms that can reliably detect background changes with minimal nuisance alarms. Another goal is to recognize unusual activities or foreground conditions that could signal an attack (e.g., large numbers of running people, people falling to the floor, etc.). Detection of background changes and/or unusual foreground activity can be used to alert security forces to the presence and location of potential threats. The results of this research are summarized in several MS PowerPoint slides included with this report.
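The small-variance hypothesis in item 2 can be illustrated with a single-pixel online Gaussian mixture in the spirit of mixture-density background modeling. This is a hedged sketch only: the learning rate, match threshold, initialization, and component count are assumptions, not values from the report.

```python
def update_mixture(components, x, lr=0.05, match_sigma=2.5):
    """One online update of a single pixel's Gaussian mixture (sketch).

    components: list of dicts with 'mean', 'var', 'weight' keys.
    x: the pixel's intensity in the current frame.
    """
    matched = None
    for c in components:
        # Match if x lies within match_sigma standard deviations.
        if (x - c['mean']) ** 2 <= (match_sigma ** 2) * c['var']:
            matched = c
            break
    if matched is None:
        # No component explains x: replace the weakest component with a
        # new, high-variance one centered on x.
        weakest = min(components, key=lambda c: c['weight'])
        weakest.update(mean=x, var=100.0, weight=0.05)
    else:
        matched['weight'] += lr * (1 - matched['weight'])
        matched['mean'] += lr * (x - matched['mean'])
        matched['var'] += lr * ((x - matched['mean']) ** 2 - matched['var'])
    # Renormalize the mixture weights.
    total = sum(c['weight'] for c in components)
    for c in components:
        c['weight'] /= total

def background_estimate(components):
    # The report's hypothesis: the background component is the one with
    # the smallest variance, even if it is rarely observed.
    return min(components, key=lambda c: c['var'])['mean']
```

Because the background value recurs exactly while foreground values scatter, the background component's variance shrinks over time, so selecting the minimum-variance component recovers the background even under heavy foreground activity.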
  3. This report addresses the development of automated video-screening technology to assist security forces in protecting our homeland against terrorist threats. A threat of specific interest to this project is the covert placement and subsequent remote detonation of bombs (e.g., briefcase bombs) inside crowded public facilities. Different from existing video motion detection systems, the video-screening technology described in this report is capable of detecting changes in the static background of an otherwise dynamic environment - environments where motion and human activities are persistent. Our goal was to quickly detect changes in the background - even under conditions when the background is visible to the camera less than 5% of the time. Instead of subtracting the background to detect movement or changes in a scene, we subtracted the dynamic scene variations to produce an estimate of the static background. Subsequent comparisons of static background estimates are used to detect changes in the background. Detected changes can be used to alert security forces of the presence and location of potential threats. The results of this research are summarized in two MS PowerPoint presentations included with this report.
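One way to "subtract the dynamic scene variations" as in item 3 is a per-pixel histogram mode over a frame buffer: unlike a median, a mode can recover a background visible only a minority of the time, provided foreground values scatter across many bins. This is a deliberately simple baseline sketch, not the report's estimator; the `quant` bin width and `tol` change threshold are assumed parameters.

```python
from collections import Counter

def estimate_background(frames, quant=8):
    """Per-pixel histogram mode over a buffer of frames (baseline sketch).

    frames: list of 2-D lists of gray levels. Returns a 2-D background
    estimate: each pixel gets the center of its most frequent intensity bin.
    """
    h, w = len(frames[0]), len(frames[0][0])
    bg = [[0] * w for _ in range(h)]
    for r in range(h):
        for c in range(w):
            bins = Counter(f[r][c] // quant for f in frames)
            bg[r][c] = bins.most_common(1)[0][0] * quant + quant // 2
    return bg

def changed_pixels(bg_a, bg_b, tol=16):
    """Compare two successive static-background estimates to flag changes."""
    return [(r, c)
            for r, row in enumerate(bg_a)
            for c, v in enumerate(row)
            if abs(v - bg_b[r][c]) > tol]
```

Comparing estimates from two buffers, rather than frames against one estimate, is what separates "something moved" from "the background itself changed" - the distinction the report needs for detecting a placed object.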
  4. Abstract not provided.
  5. Abstract not provided.
  6. Methods for segmenting the reflected light of an illumination source having a characteristic wavelength from background illumination (i.e., clutter) in structured lighting systems can comprise: pulsing the light source used to illuminate a scene; pulsing the light source synchronously with the opening of a shutter in an imaging device; estimating the contribution of background clutter by interpolation of images of the scene collected at multiple spectral bands not including the characteristic wavelength and subtracting the estimated background contribution from an image of the scene comprising the wavelength of the light source; and placing a polarizing filter between the imaging device and the scene, where the illumination source can be polarized in the same orientation as the polarizing filter. Apparatus for segmenting the light of an illumination source from background illumination can comprise an illuminator, an image receiver for receiving images of multiple spectral bands, a processor for calculations and interpolations, and a polarizing filter.
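The spectral-interpolation step in item 6 can be sketched directly: estimate the clutter at the laser's characteristic wavelength by linearly interpolating images taken at two flanking spectral bands, subtract the estimate, and keep only the residual above a threshold. The band wavelengths and the threshold below are hypothetical, chosen only to illustrate the arithmetic.

```python
def segment_laser(img_laser, img_low, img_high,
                  lam, lam_low, lam_high, thresh=10.0):
    """Segment laser return from broadband clutter (illustrative sketch).

    img_laser: image at the characteristic wavelength lam (2-D list);
    img_low/img_high: images at flanking bands lam_low < lam < lam_high.
    Returns an image of residual intensity attributed to the laser.
    """
    # Linear interpolation weight of the laser band between the two
    # flanking bands (clutter is assumed spectrally smooth).
    t = (lam - lam_low) / (lam_high - lam_low)
    out = []
    for row_l, row_a, row_b in zip(img_laser, img_low, img_high):
        row_out = []
        for l, a, b in zip(row_l, row_a, row_b):
            clutter = (1 - t) * a + t * b  # estimated background at lam
            residual = l - clutter
            row_out.append(residual if residual > thresh else 0.0)
        out.append(row_out)
    return out
```

The pulsed-shutter and polarization steps in the claim attack the same problem from other directions; the interpolation above is the purely computational piece.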
  7. Sandia National Laboratories has been investigating the use of remotely operated weapon platforms in Department of Energy (DOE) facilities. These platforms offer significant force multiplication and enhancement by enabling near-instantaneous response to attackers, increasing targeting accuracy, removing personnel from direct weapon fire, providing immunity to suppressive fire, and reducing the security force size needed to respond effectively. Test results of the Telepresent Rapid Aiming Platform (TRAP) from Precision Remotes, Inc. have been exceptional, and the response from DOE sites and the U.S. Air Force is enthusiastic. Although this platform performs comparably to a trained marksman, target acquisition times are up to three times longer. TRAP is currently enslaved to a remote operator's joystick. Tracking moving targets with a joystick is difficult; it depends upon target range, movement patterns, and operator skill. Even well-trained operators encounter difficulty tracking moving targets. Adding intelligent targeting capabilities to a weapon platform such as TRAP would significantly improve security force response in terms of effectiveness and numbers of responders. The initial goal of this project was to integrate intelligent targeting with TRAP. However, the unavailability of a TRAP for laboratory purposes drove the development of a new platform that simulates TRAP but has a greater operating range and is significantly faster to reposition.
  8. This report presents the results of experimental tests of a concept for using infrared (IR) photos to identify non-operational systems based on their glazing temperatures; operating systems have lower glazing temperatures than those in stagnation. In recent years thousands of new solar hot water (SHW) systems have been installed in some utility districts. As these numbers increase, concern is growing about the systems' dependability, because installation rebates are often based on the assumption that all of the SHW systems will perform flawlessly for a 20-year period. If SHW systems routinely fail prematurely, then the utilities will have overpaid for grid-energy reduction performance that is unrealized. Moreover, utilities are responsible for replacing energy for loads that failed SHW systems were supplying. Thus, utilities are seeking data to quantify the reliability of SHW systems. The work described herein is intended to help meet this need. The details of the experiment are presented, including a description of the SHW collectors that were examined, the testbed that was used to control the system and record data, the IR camera that was employed, and the conditions in which testing was completed. The details of the associated analysis are presented, including direct examination of the video records of operational and stagnant collectors, as well as the development of a model to predict glazing temperatures and an analysis of temporal intermittency of the images, both of which are critical to properly adjusting the IR camera for optimal performance. Many IR images and a video are presented to show the contrast between operating and stagnant collectors. The major conclusion is that the technique has the potential to be applied using an aircraft fitted with an IR camera flown over an area with installed SHW systems to record images; subsequent analysis of the images can determine the operational condition of the fielded collectors. Specific recommendations are presented relative to the application of the technique, including ways to mitigate and manage potential sources of error.
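The screening logic in item 8 reduces to a temperature-contrast test: an operating collector carries heat away in its working fluid, so its glazing runs cooler than a stagnant one. The margin below is a hypothetical tuning parameter, not a value from the report, which instead develops a predictive glazing-temperature model.

```python
def classify_collectors(glazing_temps, ambient, stagnation_margin=15.0):
    """Flag collectors as stagnant from IR-derived glazing temperatures.

    glazing_temps: dict of collector id -> glazing temperature (deg C);
    ambient: ambient air temperature (deg C). A collector whose glazing
    exceeds ambient by more than stagnation_margin is flagged as
    non-operational (stagnant); the margin is an assumed threshold.
    """
    return {cid: ('stagnant' if t - ambient > stagnation_margin
                  else 'operating')
            for cid, t in glazing_temps.items()}
```

In practice the threshold would have to vary with insolation and wind, which is why the report builds a glazing-temperature model rather than relying on a fixed margin.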
  9. The computer vision field has undergone a revolution of sorts in the past five years. Moore's law has driven real-time image processing from the domain of dedicated, expensive hardware to the domain of commercial off-the-shelf computers. This thesis describes the authors' work on the design, analysis, and implementation of a Real-Time Shape from Silhouette Sensor (RT S³). The system produces time-varying volumetric data at real-time rates (10-30 Hz). The data is in the form of binary volumetric images. Until recently, using this technique in a real-time system was impractical due to the computational burden. In this thesis they review the previous work in the field and derive the mathematics behind volumetric calibration, silhouette extraction, and shape-from-silhouette. For the sensor implementation, they use four color camera/framegrabber pairs and a single high-end Pentium III computer. The color cameras were configured to observe a common volume. This hardware uses the RT S³ software to track volumetric motion. Two types of shape-from-silhouette algorithms were implemented and their relative performance was compared. They have also explored an application of this sensor to markerless motion tracking. In his recent review of work done in motion tracking, Gavrila states that results of markerless vision-based 3D tracking are still limited. The method proposed in this paper not only expands upon the previous work but also attempts to overcome these limitations.
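The shape-from-silhouette core of item 9 is voxel carving: a voxel belongs to the visual hull only if its projection falls inside the silhouette in every calibrated camera view. A minimal sketch, assuming per-camera `project(x, y, z) -> (u, v)` callables supplied by calibration and binary silhouette images:

```python
def carve(voxels, cameras, silhouettes):
    """Visual-hull voxel carving (illustrative sketch).

    voxels: list of (x, y, z) candidate voxel centers;
    cameras: list of project(x, y, z) -> (u, v) pixel-projection callables
    (assumed to come from volumetric calibration);
    silhouettes: one binary 2-D list per camera (1 = foreground).
    Returns the voxels that survive carving in all views.
    """
    hull = []
    for v in voxels:
        keep = True
        for project, sil in zip(cameras, silhouettes):
            u, w = project(*v)
            # Carve the voxel away if it projects outside the image or
            # onto a background pixel in any view.
            if not (0 <= w < len(sil) and 0 <= u < len(sil[w]) and sil[w][u]):
                keep = False
                break
        if keep:
            hull.append(v)
    return hull
```

The per-voxel, per-camera cost is why this was impractical in real time until commodity CPUs caught up; the thesis's binary volumetric images are exactly the keep/carve decisions above over a dense grid.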
  10. This report summarizes the analytical and experimental efforts for the Laboratory Directed Research and Development (LDRD) project entitled "Advancements in Sensing and Perception using Structured Lighting Techniques". There is an ever-increasing need for robust, autonomous ground vehicles for counterterrorism and defense missions. Although there has been nearly 30 years of government-sponsored research, it is undisputed that significant advancements in sensing and perception are necessary. We developed an innovative, advanced sensing technology for national security missions serving the Department of Energy, the Department of Defense, and other government agencies. The principal goal of this project was to develop an eye-safe, robust, low-cost, lightweight, 3D structured lighting sensor for use in broad-daylight outdoor applications. The market for this technology is wide open due to the unavailability of such a sensor. Currently available laser scanners are slow, bulky, heavy, expensive, fragile, short-range, sensitive to vibration (highly problematic for moving platforms), and unreliable for outdoor use in bright sunlight conditions. Eye-safety issues are a primary concern for currently available laser-based sensors. Passive stereo-imaging sensors are available for 3D sensing but suffer from several limitations: they are computationally intensive, require a lighted environment (natural or man-made light source), and do not work for many scenes or regions lacking texture or with ambiguous texture. Our approach leveraged the advanced capabilities of modern CCD camera technology and Center 6600's expertise in 3D world modeling, mapping, and analysis using structured lighting. We have a diverse customer base for indoor mapping applications, and this research extends our current technology's lifecycle and opens a new market base for outdoor 3D mapping. Applications include precision mapping, autonomous navigation, dexterous manipulation, surveillance and reconnaissance, part inspection, geometric modeling, laser-based 3D volumetric imaging, simultaneous localization and mapping (SLAM), aiding first responders, and supporting soldiers with helmet-mounted LADAR for 3D mapping in urban-environment scenarios. The technology developed in this LDRD overcomes the limitations of current laser-based 3D sensors and contributes to the realization of intelligent machine systems that reduce manpower needs.
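A structured-lighting sensor like the one in item 10 recovers depth by triangulating a camera ray against a projected light plane. The 2-D geometry below is a generic textbook sketch, not the project's design; the baseline, focal length, and plane angle are assumed calibration values.

```python
import math

def stripe_depth(u_px, focal_px, baseline_m, plane_angle_rad):
    """Depth from one structured-light stripe by triangulation (sketch).

    2-D model: pinhole camera at the origin looking along +z with focal
    length focal_px; projector offset baseline_m along +x casts a light
    plane through (baseline_m, 0) at plane_angle_rad from the x-axis.
    Camera ray:  x = z * u_px / focal_px
    Light plane: z = (baseline_m - x) * tan(plane_angle_rad)
    Solving the two for z gives the closed form returned below.
    """
    t = math.tan(plane_angle_rad)
    return baseline_m * focal_px * t / (focal_px + u_px * t)
```

Depth resolution degrades as the denominator grows, which is one reason baseline and plane angle are central design choices for an outdoor, long-range sensor.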