OSTI.GOV — U.S. Department of Energy
Office of Scientific and Technical Information

Title: Integrate Light-Weight Deep Learning Tools with Internet of Things

Technical Report

Today’s smart devices, such as collision avoidance systems, rely on the detection of past and current events and then react to mitigate and reduce the severity of the ultimate consequence. The lack of proactiveness often renders such a detection system less useful in an emergency. The ability to predict the future based on what has been heard, sensed, or observed at the current moment is of the utmost importance for real-time systems such as autonomous driving and safety and security monitoring and warning. It is quite common for a prediction system to completely reconstruct (i.e., predict) the raw image pixels of the future frame. However, this does not reflect human intelligence, where the future is held in an abstract format; complete visual reconstruction of a scene is overkill for decision making and too slow for smart devices such as autonomous driving, collision avoidance, and safety monitoring systems.

We developed a novel approach, Deep Sensor, that executes learning tasks in a contextually meaningful latent space, generates abstract representations (embeddings) of images and frames, and then performs classification on the learned sequence of embeddings. The objectives in this proposal were to integrate recent successes in neural networks, smart sensors, and the Internet of Things (IoT) and to design a cost-effective, real-time prediction and decision-making system that learns from sequences of past training images and video frames, predicts the real data patterns, and prescribes future actions.

We designed multiple deep learning network stacks that learned the latent information from images and video frames and constructed the embedding vectors. Subsequently, we performed supervised and unsupervised learning with the extracted embedding vectors, in particular using Long Short-Term Memory (LSTM) recurrent networks and convolutional neural networks (CNNs) to classify the input images and video frames and predict future actions.
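The report does not include source code; as a minimal NumPy sketch of the general pattern described above (frames encoded into embeddings, an LSTM run over the embedding sequence, and a classifier on the final hidden state), with all layer sizes, weight names, and random data purely illustrative rather than taken from the actual Deep Sensor implementation:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, p):
    """One standard LSTM step: gates computed from the concatenation [x, h]."""
    z = np.concatenate([x, h])
    i = sigmoid(z @ p["Wi"])   # input gate
    f = sigmoid(z @ p["Wf"])   # forget gate
    o = sigmoid(z @ p["Wo"])   # output gate
    g = np.tanh(z @ p["Wg"])   # candidate cell state
    c = f * c + i * g
    h = o * np.tanh(c)
    return h, c

def classify_sequence(frames, W_enc, p, W_cls):
    """Encode each frame into an embedding, run the LSTM over the
    embedding sequence, and classify from the final hidden state."""
    d_hid = W_cls.shape[0]
    h = np.zeros(d_hid)
    c = np.zeros(d_hid)
    for frame in frames:
        emb = np.tanh(frame.ravel() @ W_enc)   # abstract representation
        h, c = lstm_step(emb, h, c, p)
    logits = h @ W_cls
    e = np.exp(logits - logits.max())
    return e / e.sum()                         # class probabilities

# Toy dimensions: 8x8 frames, 16-dim embeddings, 32-dim LSTM, 3 classes.
rng = np.random.default_rng(0)
d_in, d_emb, d_hid, n_cls = 64, 16, 32, 3
W_enc = rng.normal(size=(d_in, d_emb)) * 0.1
p = {k: rng.normal(size=(d_emb + d_hid, d_hid)) * 0.1
     for k in ("Wi", "Wf", "Wo", "Wg")}
W_cls = rng.normal(size=(d_hid, n_cls)) * 0.1

frames = rng.normal(size=(10, 8, 8))           # a 10-frame clip
probs = classify_sequence(frames, W_enc, p, W_cls)
```

The design point this illustrates is that classification operates on a short sequence of low-dimensional embeddings rather than on raw pixels, which is what makes the approach cheap enough for edge devices.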
We implemented a novel idea for improving convolutional neural networks: plug an autoencoder into a Generative Adversarial Network (GAN) and use game-theoretic principles to alternately train the CNN-based generator and its counterpart until they reach the Nash equilibrium, where the prediction results appear as realistic as ground truth. We successfully incorporated our Deep Sensor prototype into multiple data analysis and machine learning use scenarios in the edge computing environment, combining hardware acceleration and algorithmic optimization to reduce inference latency by an order of magnitude and thereby meet real-time requirements. In particular, we designed an autonomous driving vehicle with an NVIDIA embedded system that can self-drive for two minutes in a hallway with turns, obstacles, and entrances. We integrated our deep neural networks into NSLS-II at BNL to recognize whether a video clip taken during materials science experiments contains an exciting event, such as crystallization. We also applied the GAN-enabled autoencoder to X-ray imaging at NSLS-II to reduce the noise and artifacts caused by sample vibration and generated high-quality 3-D tomography.

We summarize our contributions as four accomplishments: autonomous driving; in-operando experiment video classification and event detection; innovative deep-learning-based algorithms for image alignment and noise removal in high-resolution tomography reconstruction; and lightweight deep learning on the Internet of Things for low-cost skin cancer detection.
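To make the alternating-objective structure of the autoencoder-in-a-GAN scheme concrete, the following is a hedged NumPy sketch of a single adversarial round. The stand-in networks (a linear encoder/decoder pair as the generator, a single linear-plus-sigmoid layer as the discriminator), the loss mix, and all variable names are illustrative assumptions, not the report's actual architecture; gradient updates are omitted.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def bce(pred, target):
    """Binary cross-entropy used for both players' adversarial losses."""
    eps = 1e-9
    return -np.mean(target * np.log(pred + eps)
                    + (1 - target) * np.log(1 - pred + eps))

rng = np.random.default_rng(1)
d = 64

# Stand-in networks: the generator is an autoencoder (encoder + decoder),
# the discriminator a single linear layer with a sigmoid output.
W_e = rng.normal(size=(d, 16)) * 0.1   # encoder weights
W_d = rng.normal(size=(16, d)) * 0.1   # decoder weights
w_disc = rng.normal(size=d) * 0.1      # discriminator weights

real = rng.normal(size=(8, d))         # a batch of "real" frames
fake = np.tanh(real @ W_e) @ W_d       # autoencoder output = generated frames

# Discriminator step: label real frames 1, generated frames 0.
d_loss = bce(sigmoid(real @ w_disc), 1.0) + bce(sigmoid(fake @ w_disc), 0.0)

# Generator step: fool the discriminator into labeling fakes as real,
# plus a reconstruction term keeping the output close to the input.
g_loss = bce(sigmoid(fake @ w_disc), 1.0) + np.mean((fake - real) ** 2)
```

In full training, gradient steps on `d_loss` and `g_loss` alternate; the game-theoretic claim in the text is that this alternation drives the pair toward a Nash equilibrium in which the discriminator can no longer distinguish reconstructions from ground truth.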

Research Organization:
Sunrise Technology, Inc.
Sponsoring Organization:
USDOE Office of Science (SC), Advanced Scientific Computing Research (ASCR)
DOE Contract Number:
SC0018455
OSTI ID:
1571825
Type / Phase:
SBIR (Phase I)
Report Number(s):
DE-SC0018455
Country of Publication:
United States
Language:
English