OSTI.GOV U.S. Department of Energy
Office of Scientific and Technical Information

Title: Integrate Light-Weight Deep Learning Tools with Internet of Things

Abstract

Today’s smart devices, such as collision avoidance systems, rely on detecting past and current events and then reacting to mitigate and reduce the severity of the ultimate consequence. This lack of proactiveness often renders such a detection system less useful in an emergency. The ability to predict the future based on what has been heard, sensed, and observed at the current moment is of utmost importance for real-time systems such as autonomous driving, safety, and security monitoring and warning. It is quite common for a prediction system to completely reconstruct (i.e., predict) the raw image pixels of the future frame. However, this does not reflect human intelligence, where the future is held in an abstract form; complete visual reconstruction of a scene is overkill for decision making and too slow for smart devices such as autonomous driving, collision avoidance systems, and safety monitors.

We developed a novel approach, Deep Sensor, to execute learning tasks in a contextually meaningful latent space, generate abstract representations (embeddings) of images and frames, and then perform classification over the learned sequence of embeddings. The objectives in this proposal are to integrate recent successes in neural networks, smart sensors, and the Internet of Things (IoT) and to design a cost-effective real-time prediction and decision-making system that learns from the sequence of past training image and video frames, predicts real data patterns, and prescribes future actions.

We designed multiple deep learning network stacks that learned latent information from images and video frames and constructed embedding vectors. Subsequently, we performed supervised and unsupervised learning with the extracted embedding vectors, in particular using Long Short-Term Memory (LSTM) recurrent networks and convolutional neural networks (CNNs), to classify the input images and video frames and predict future actions.
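The pipeline described above (encode each frame into an embedding, then run an LSTM over the embedding sequence and classify from its final state) can be sketched in miniature. This is an illustrative NumPy toy, not the report's implementation: the linear "encoder", the dimensions, and the random weights are hypothetical stand-ins for a trained CNN and LSTM.

```python
import numpy as np

rng = np.random.default_rng(0)
EMB, HID, CLASSES = 16, 8, 3  # embedding size, LSTM hidden size, class count (all illustrative)

def encode(frame, W_enc):
    """Toy 'encoder': a fixed linear projection of the flattened frame into
    a low-dimensional embedding (a stand-in for a trained CNN encoder)."""
    return np.tanh(frame.ravel() @ W_enc)

def lstm_step(x, h, c, p):
    """One LSTM cell step: gate the cell state with input/forget/output gates."""
    z = np.concatenate([x, h]) @ p["W"] + p["b"]
    i, f, o, g = np.split(z, 4)
    sig = lambda a: 1.0 / (1.0 + np.exp(-a))
    c = sig(f) * c + sig(i) * np.tanh(g)   # update cell memory
    h = sig(o) * np.tanh(c)                # expose gated hidden state
    return h, c

def classify_sequence(frames, W_enc, p, W_out):
    """Encode each frame, run the LSTM over the embedding sequence,
    and classify from the final hidden state via softmax."""
    h, c = np.zeros(HID), np.zeros(HID)
    for frame in frames:
        h, c = lstm_step(encode(frame, W_enc), h, c, p)
    logits = h @ W_out
    probs = np.exp(logits - logits.max())
    return probs / probs.sum()

W_enc = rng.normal(size=(32 * 32, EMB)) / 32
params = {"W": rng.normal(size=(EMB + HID, 4 * HID)) * 0.1, "b": np.zeros(4 * HID)}
W_out = rng.normal(size=(HID, CLASSES)) * 0.1

frames = rng.normal(size=(10, 32, 32))  # a 10-frame toy "video"
probs = classify_sequence(frames, W_enc, params, W_out)
print(probs.shape)  # a (CLASSES,) probability vector summing to ~1
```

The point of the sketch is the structure: per-frame embeddings keep the sequence model small, so the recurrent classifier operates in the latent space rather than on raw pixels.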
We implemented a novel idea for improving convolutional neural networks: plug an autoencoder into a Generative Adversarial Network (GAN) and use game theory principles to train the CNN-based generator and its discriminator counterpart alternately until they reach the Nash equilibrium, where the prediction results appear as realistic as the ground truth. We successfully incorporated our prototype Deep Sensor into multiple usage scenarios for data analysis and machine learning algorithms in the edge computing environment, and we combined hardware acceleration with algorithmic optimization to reduce inference latency by one order of magnitude and thereby meet real-time requirements. In particular, we designed an autonomous driving vehicle with an NVidia embedded system that can self-drive for two minutes in a hallway with turns, obstacles, and entrances. We integrated our deep neural networks into NSLS-II at BNL to recognize whether a video clip taken during materials science experiments contains an interesting event, such as crystallization. We also applied the GAN-enabled autoencoder to X-ray imaging at NSLS-II to reduce the noise and artifacts caused by sample vibration and to generate high-quality 3-D tomography.

We summarize our contributions as four accomplishments: autonomous driving; in-operando experiment video classification and event detection; innovative deep-learning algorithms for image alignment and noise removal in high-resolution tomography reconstruction; and lightweight deep learning on the Internet of Things for low-cost skin cancer detection.
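The adversarial objective described above can be sketched as a pair of loss functions: the discriminator learns to separate real frames from autoencoder reconstructions, while the generator minimizes reconstruction error plus a term rewarding it for fooling the discriminator. This is a hedged NumPy illustration of the objectives only (the scores, weights, and batch are synthetic stand-ins); the report's actual networks and training loop are not reproduced here.

```python
import math
import numpy as np

def bce(pred, target, eps=1e-7):
    """Binary cross-entropy between discriminator scores in (0,1) and labels."""
    pred = np.clip(pred, eps, 1 - eps)
    return float(-np.mean(target * np.log(pred) + (1 - target) * np.log(1 - pred)))

def discriminator_loss(d_real, d_fake):
    """D is trained to score real frames as 1 and reconstructions as 0."""
    return bce(d_real, np.ones_like(d_real)) + bce(d_fake, np.zeros_like(d_fake))

def generator_loss(d_fake, recon, target, adv_weight=0.1):
    """The autoencoder-generator minimizes reconstruction error plus an
    adversarial term that rewards fooling the discriminator (label 1)."""
    rec = float(np.mean((recon - target) ** 2))
    adv = bce(d_fake, np.ones_like(d_fake))
    return rec + adv_weight * adv

rng = np.random.default_rng(1)
real = rng.normal(size=(4, 64))                 # a batch of "real" frame features
fake = real + 0.1 * rng.normal(size=(4, 64))    # autoencoder reconstructions
d_real = rng.uniform(0.6, 0.9, size=4)          # stand-in discriminator scores
d_fake = rng.uniform(0.1, 0.4, size=4)

d_loss = discriminator_loss(d_real, d_fake)
g_loss = generator_loss(d_fake, fake, real)
```

In alternating training, one step descends `d_loss` with the generator frozen, the next descends `g_loss` with the discriminator frozen; at the equilibrium the abstract describes, the discriminator scores both sides near 0.5 and can no longer tell reconstructions from ground truth.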

Authors:
Sun, Yu; Yu, Dantong
Publication Date:
September 2019
Research Org.:
Sunrise Technology, Inc.
Sponsoring Org.:
USDOE Office of Science (SC), Advanced Scientific Computing Research (ASCR) (SC-21)
OSTI Identifier:
1571825
Report Number(s):
DE-SC0018455
DOE Contract Number:  
SC0018455
Type / Phase:
SBIR (Phase I)
Resource Type:
Technical Report
Country of Publication:
United States
Language:
English

Citation Formats

Sun, Yu, and Yu, Dantong. Integrate Light-Weight Deep Learning Tools with Internet of Things. United States: N. p., 2019. Web.
Sun, Yu, & Yu, Dantong. Integrate Light-Weight Deep Learning Tools with Internet of Things. United States.
Sun, Yu, and Yu, Dantong. "Integrate Light-Weight Deep Learning Tools with Internet of Things". United States.
@article{osti_1571825,
title = {Integrate Light-Weight Deep Learning Tools with Internet of Things},
author = {Sun, Yu and Yu, Dantong},
abstractNote = {Today’s smart devices, such as collision avoidance systems, rely on detecting past and current events and then reacting to mitigate and reduce the severity of the ultimate consequence. This lack of proactiveness often renders such a detection system less useful in an emergency. The ability to predict the future based on what has been heard, sensed, and observed at the current moment is of utmost importance for real-time systems such as autonomous driving, safety, and security monitoring and warning. It is quite common for a prediction system to completely reconstruct (i.e., predict) the raw image pixels of the future frame. However, this does not reflect human intelligence, where the future is held in an abstract form; complete visual reconstruction of a scene is overkill for decision making and too slow for smart devices such as autonomous driving, collision avoidance systems, and safety monitors. We developed a novel approach, Deep Sensor, to execute learning tasks in a contextually meaningful latent space, generate abstract representations (embeddings) of images and frames, and then perform classification over the learned sequence of embeddings. The objectives in this proposal are to integrate recent successes in neural networks, smart sensors, and the Internet of Things (IoT) and to design a cost-effective real-time prediction and decision-making system that learns from the sequence of past training image and video frames, predicts real data patterns, and prescribes future actions. We designed multiple deep learning network stacks that learned latent information from images and video frames and constructed embedding vectors. Subsequently, we performed supervised and unsupervised learning with the extracted embedding vectors, in particular using Long Short-Term Memory (LSTM) recurrent networks and convolutional neural networks (CNNs), to classify the input images and video frames and predict future actions. We implemented a novel idea for improving convolutional neural networks: plug an autoencoder into a Generative Adversarial Network (GAN) and use game theory principles to train the CNN-based generator and its discriminator counterpart alternately until they reach the Nash equilibrium, where the prediction results appear as realistic as the ground truth. We successfully incorporated our prototype Deep Sensor into multiple usage scenarios for data analysis and machine learning algorithms in the edge computing environment, and we combined hardware acceleration with algorithmic optimization to reduce inference latency by one order of magnitude and thereby meet real-time requirements. In particular, we designed an autonomous driving vehicle with an NVidia embedded system that can self-drive for two minutes in a hallway with turns, obstacles, and entrances. We integrated our deep neural networks into NSLS-II at BNL to recognize whether a video clip taken during materials science experiments contains an interesting event, such as crystallization. We also applied the GAN-enabled autoencoder to X-ray imaging at NSLS-II to reduce the noise and artifacts caused by sample vibration and to generate high-quality 3-D tomography. We summarize our contributions as four accomplishments: autonomous driving; in-operando experiment video classification and event detection; innovative deep-learning algorithms for image alignment and noise removal in high-resolution tomography reconstruction; and lightweight deep learning on the Internet of Things for low-cost skin cancer detection.},
doi = {},
journal = {},
number = {},
volume = {},
place = {United States},
year = {2019},
month = {9}
}

Technical Report:
This technical report may be released as soon as October 25, 2023
Other availability
Please see Document Availability for additional information on obtaining the full-text document. Library patrons may search WorldCat to identify libraries that may hold this item. Keep in mind that many technical reports are not cataloged in WorldCat.
