OSTI.GOV — U.S. Department of Energy
Office of Scientific and Technical Information

Title: Statistical Hypothesis Testing using CNN Features for Synthesis of Adversarial Counterexamples to Human and Object Detection Vision Systems

Abstract

Validating the correctness of human detection vision systems is crucial for safety applications such as pedestrian collision avoidance in autonomous vehicles. The enormous space of possible inputs to such an intelligent system makes it difficult to design adequate test cases. In this report, we present our tool MAYA that uses an error model derived from a convolutional neural network (CNN) to explore the space of images similar to a given input image, and then tests the correctness of a given human or object detection system on such perturbed images. We demonstrate the capability of our tool on the pre-trained Histogram-of-Oriented-Gradients (HOG) human detection algorithm implemented in the popular OpenCV toolset and the Caffe object detection system pre-trained on the ImageNet benchmark. Our tool may serve as a testing resource for the designers of intelligent human and object detection systems.
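The record does not reproduce MAYA's implementation. Purely as an illustration of the perturb-and-test idea the abstract describes, here is a minimal Python sketch in which a toy stand-in detector replaces a real vision system (it is not OpenCV's HOG detector, and every name, threshold, and parameter below is hypothetical):

```python
import random

def toy_detector(pixels, threshold=0.5):
    # Stand-in for a real detector (e.g. OpenCV's HOG person detector):
    # it reports a detection when mean intensity exceeds a threshold.
    return sum(pixels) / len(pixels) > threshold

def perturb(pixels, eps, rng):
    # Produce a nearby image by adding bounded noise to each pixel,
    # clipped back into the valid intensity range [0, 1].
    return [min(1.0, max(0.0, p + rng.uniform(-eps, eps))) for p in pixels]

def find_counterexample(pixels, detector, eps=0.1, trials=1000, seed=0):
    # Randomly explore the eps-neighborhood of the input image and
    # return the first perturbed image on which the detector's
    # verdict flips relative to the original.
    rng = random.Random(seed)
    original = detector(pixels)
    for _ in range(trials):
        candidate = perturb(pixels, eps, rng)
        if detector(candidate) != original:
            return candidate
    return None

# A borderline "image": the detector fires on it, but small
# perturbations can flip the verdict.
image = [0.505] * 64
adv = find_counterexample(image, toy_detector)
```

A real test harness would replace `toy_detector` with a call into the system under test and report the flipping image as an adversarial counterexample.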

Authors:
 Raj, Sunny [1]; Jha, Sumit Kumar [1]; Pullum, Laura L. [2]; Ramanathan, Arvind [2]
  1. Univ. of Central Florida, Orlando, FL (United States)
  2. Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)
Publication Date:
2017-05-01
Research Org.:
Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)
Sponsoring Org.:
USDOE
OSTI Identifier:
1361358
Report Number(s):
ORNL/LTR-2017/118
45304036B
DOE Contract Number:
AC05-00OR22725
Resource Type:
Technical Report
Country of Publication:
United States
Language:
English
Subject:
97 MATHEMATICS AND COMPUTING; statistical hypothesis checking; image analysis; adversarial counter-examples

Citation Formats

Raj, Sunny, Jha, Sumit Kumar, Pullum, Laura L., and Ramanathan, Arvind. Statistical Hypothesis Testing using CNN Features for Synthesis of Adversarial Counterexamples to Human and Object Detection Vision Systems. United States: N. p., 2017. Web. doi:10.2172/1361358.
Raj, Sunny, Jha, Sumit Kumar, Pullum, Laura L., & Ramanathan, Arvind. Statistical Hypothesis Testing using CNN Features for Synthesis of Adversarial Counterexamples to Human and Object Detection Vision Systems. United States. doi:10.2172/1361358.
Raj, Sunny, Jha, Sumit Kumar, Pullum, Laura L., and Ramanathan, Arvind. 2017. "Statistical Hypothesis Testing using CNN Features for Synthesis of Adversarial Counterexamples to Human and Object Detection Vision Systems". United States. doi:10.2172/1361358. https://www.osti.gov/servlets/purl/1361358.
@article{osti_1361358,
title = {Statistical Hypothesis Testing using CNN Features for Synthesis of Adversarial Counterexamples to Human and Object Detection Vision Systems},
author = {Raj, Sunny and Jha, Sumit Kumar and Pullum, Laura L. and Ramanathan, Arvind},
abstractNote = {Validating the correctness of human detection vision systems is crucial for safety applications such as pedestrian collision avoidance in autonomous vehicles. The enormous space of possible inputs to such an intelligent system makes it difficult to design test cases for such systems. In this report, we present our tool MAYA that uses an error model derived from a convolutional neural network (CNN) to explore the space of images similar to a given input image, and then tests the correctness of a given human or object detection system on such perturbed images. We demonstrate the capability of our tool on the pre-trained Histogram-of-Oriented-Gradients (HOG) human detection algorithm implemented in the popular OpenCV toolset and the Caffe object detection system pre-trained on the ImageNet benchmark. Our tool may serve as a testing resource for the designers of intelligent human and object detection systems.},
doi = {10.2172/1361358},
place = {United States},
year = {2017},
month = may
}

Similar Records:
  • The behavior of a group of people depends strongly on the interaction of personal (individual) traits with the collective moods of the group as a whole. We have developed a computer program to model circumstances of this nature with recognition of the crucial role played by such psychological properties as fear, excitement, peer pressure, moral outrage, and anger, together with the distribution among participants of intrinsic susceptibilities to these emotions. This report extends previous work to consider two groups of people in adversarial encounter, for example, two platoons in battle, a SWAT team against rioting prisoners, or opposing mobs of different ethnic backgrounds. Closely related applications of the modeling include prowling groups of predatory animals interacting with herds of prey, and even the "slow-mob" behavior of social or political units in their response to legislative or judicial activities. Examples in this present study emphasize battlefield encounters, with each group characterized by its susceptibilities, skills, and other manifestations of both intentional and accidental circumstances. Specifically, we investigate the relative importance of leadership, camaraderie, training level (i.e., skill in firing weapons), bravery, excitability, and dedication in the battle performance of personnel with random or specified distributions of capabilities and susceptibilities in these various regards. The goal is to exhibit the probable outcome of these encounters in circumstances involving specified battle goals and distributions of terrain impediments. A collateral goal is to provide a real-time hands-on battle simulator into which a leadership trainee can insert his own interactive commands.
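The abstract above describes an agent-based model but not its code. As a rough sketch of how such a two-group encounter might be simulated, here is a toy Python example; the morale/skill mechanics and all parameter values are invented for illustration and are not taken from the cited report:

```python
import random

def skirmish(n_a, n_b, skill_a, skill_b, rounds=100, seed=1):
    # Toy two-group encounter: each round, every surviving agent
    # fires once; hit probability is the shooter's skill scaled by
    # the group's morale, which decays as the group takes casualties.
    rng = random.Random(seed)
    a, b = n_a, n_b
    for _ in range(rounds):
        morale_a = a / n_a
        morale_b = b / n_b
        hits_on_b = sum(rng.random() < skill_a * morale_a for _ in range(a))
        hits_on_a = sum(rng.random() < skill_b * morale_b for _ in range(b))
        a = max(0, a - hits_on_a)
        b = max(0, b - hits_on_b)
        if a == 0 or b == 0:
            break
    return a, b

# With equal numbers, the better-trained group usually prevails.
survivors_a, survivors_b = skirmish(30, 30, skill_a=0.12, skill_b=0.06)
```

Psychological traits like bravery or excitability could be modeled the same way, as per-agent coefficients that modulate hit probability or the morale decay rate.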
  • Night vision devices, such as image intensifiers and infrared imagers, are readily available to a host of nations, organizations, and individuals through international commerce. Once the trademark of special operations units, these devices are widely advertised to "turn night into day". In truth, they cannot accomplish this formidable task, but they do offer impressive enhancement of vision in limited-light scenarios through electronically generated images. Image intensifiers and infrared imagers are both electronic devices for enhancing vision in the dark; however, each is based upon a totally different physical phenomenon. Image intensifiers amplify the available light energy, whereas infrared imagers detect the thermal energy radiated from all objects. Because of this, each device operates from energy present in a different portion of the electromagnetic spectrum, which leads to differences in the ability of each device to detect and/or identify objects. This report is a compilation of the available information on both state-of-the-art image intensifiers and infrared imagers. Image intensifiers developed in the United States, as well as some foreign-made image intensifiers, are discussed. Image intensifiers are categorized according to their spectral response and sensitivity using the nomenclature GEN I, GEN II, and GEN III. As the first generation of image intensifiers, GEN I, were large and of limited performance, this report deals only with GEN II and GEN III equipment. Infrared imagers are generally categorized according to their spectral response, sensor materials, and related sensor operating temperature using the nomenclature Medium Wavelength Infrared (MWIR) Cooled and Long Wavelength Infrared (LWIR) Uncooled. MWIR Cooled refers to infrared imagers which operate in the 3 to 5 µm wavelength region of the electromagnetic spectrum and require either mechanical or thermoelectric coolers to keep the sensors operating at 77 K. LWIR Uncooled refers to infrared imagers which operate in the 8 to 12 µm wavelength region and do not require cooling below room temperature. Both commercial and military infrared sensors of these two types are discussed.
  • Optical systems provide valuable information for evaluating interactions and associations between organisms and MHK energy converters and for capturing potentially rare encounters between marine organisms and MHK devices. The deluge of optical data from cabled monitoring packages makes expert review time-consuming and expensive. We propose algorithms and a processing framework to automatically extract events of interest from underwater video. The open-source software framework consists of background subtraction, filtering, feature extraction, and hierarchical classification algorithms. This classification pipeline was validated on real-world data collected with an experimental underwater monitoring package. An event detection rate of 100% was achieved using robust principal components analysis (RPCA), Fourier feature extraction, and a support vector machine (SVM) binary classifier. The detected events were then further classified into more complex classes: algae vs. invertebrate vs. vertebrate, one species vs. multiple species of fish, and interest rank. Greater than 80% accuracy was achieved using a combination of machine learning techniques.
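The detection stage of the pipeline described above (background model → residual → decision) can be caricatured in a few lines of stdlib Python. Note that the running-mean background and fixed residual threshold below are deliberately simplistic stand-ins for the RPCA and SVM stages the abstract reports, and every name and parameter is hypothetical:

```python
def detect_events(frames, alpha=0.1, threshold=0.3):
    # Toy event detector: maintain a running-mean background model
    # (a crude stand-in for RPCA), subtract it from each frame, and
    # flag frames whose mean absolute residual exceeds a threshold
    # (a crude stand-in for the feature-extraction + SVM stages).
    background = list(frames[0])
    events = []
    for i, frame in enumerate(frames):
        residual = sum(abs(p - b) for p, b in zip(frame, background)) / len(frame)
        if residual > threshold:
            events.append(i)
        # Exponentially blend the current frame into the background.
        background = [(1 - alpha) * b + alpha * p
                      for b, p in zip(background, frame)]
    return events

# Ten flat frames with a bright "fish" crossing in frames 4-6.
frames = [[0.1] * 16 for _ in range(10)]
for i in (4, 5, 6):
    frames[i] = [0.9] * 16
print(detect_events(frames))  # → [4, 5, 6]
```

Flagged frame indices would then be handed to a downstream classifier for the species-level labels mentioned in the abstract.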