OSTI.GOV - U.S. Department of Energy
Office of Scientific and Technical Information

Title: Training neural hardware with noisy components.


Abstract not provided.

Authors: Rothganger, Fredrick; Evans, Brian Robert; Aimone, James Bradley; Debenedictis, Erik
Publication Date: April 2015
Research Org.:
Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)
Sponsoring Org.:
USDOE National Nuclear Security Administration (NNSA)
OSTI Identifier:
Report Number(s):
DOE Contract Number:
Resource Type: Conference
Resource Relation:
Conference: Proposed for presentation at the International Joint Conference on Neural Networks held July 12-17, 2015, in Killarney, Ireland.
Country of Publication:
United States

Citation Formats

Rothganger, Fredrick, Evans, Brian Robert, Aimone, James Bradley, and Debenedictis, Erik. Training neural hardware with noisy components. United States: N. p., 2015. Web.
Rothganger, Fredrick, Evans, Brian Robert, Aimone, James Bradley, & Debenedictis, Erik. Training neural hardware with noisy components. United States.
Rothganger, Fredrick, Evans, Brian Robert, Aimone, James Bradley, and Debenedictis, Erik. 2015. "Training neural hardware with noisy components." United States.
@misc{rothganger2015training,
  title = {Training neural hardware with noisy components.},
  author = {Rothganger, Fredrick and Evans, Brian Robert and Aimone, James Bradley and Debenedictis, Erik},
  abstractNote = {Abstract not provided.},
  place = {United States},
  year = {2015},
  month = {4}
}

Other availability
Please see Document Availability for additional information on obtaining the full-text document. Library patrons may search WorldCat to identify libraries that hold this conference proceeding.

Similar Records
  • A new generic approach for realizing neural networks (NN) is presented. The underlying principle of the new approach is to take advantage of the fact that signal processing in silicon is an advanced and mature technology, and to incorporate optics where silicon fails, namely in the interconnectivity problem. The system consists of two main subassemblies: a 2D Spatial Light Modulator (SLM) and an integrated circuit which the authors refer to as the Neural Processor (NP). The synaptic efficacy matrix W is stored in the SLM. By imaging the SLM contents onto an array detector that serves as the input unit of the NP, W is loaded in parallel into the NP. The NP then updates the state of the network in a parallel or semiparallel, synchronous or asynchronous manner, depending on the structure of the NP. (A minimal sketch of such an update scheme appears after this list.)
  • The paper reviews two distinct suggestions for using artificial neural network hardware in power systems. The majority of our discussion concerns taking advantage of the hardware for fine-grained parallel computation. We also discuss our experience with recurrent artificial neural networks for load forecasting. A constant theme in power system analysis is faster computation: sometimes the need for speed is to run analysis on-line, while at other times the need is simply to perform more computation to explore a problem more thoroughly. Computation speed has historically been sought through algorithms. More recently, this search has been supplemented with attempts at parallel computation, typically involving a few CPUs on a supercomputer or up to 32 in hypercube experiments. The application of SIMD computers designed for neural network simulations to the problem of power flow calculations is discussed. Clustering techniques are introduced to enable power flow calculation times that are independent of system size. Results of recurrent-network electric load forecasting are also discussed.
  • This paper presents important limitations of hardware neural nets as opposed to biological neural nets (i.e., the real ones). The author starts by discussing neural structures and their biological inspirations, while mentioning the simplifications that lead to artificial neural nets. The focus then shifts to hardware constraints. The author presents recent results for three different alternatives for implementing neural networks: digital, threshold gate, and analog, with area and delay related to neurons' fan-in and weights' precision. Based on all of this, it is shown why hardware implementations cannot match their biological inspiration in computational power: the mapping onto silicon lacks the third dimension of biological nets, which translates into reduced fan-in and leads to reduced precision. The main conclusion is that one is faced with the following alternatives: (1) try to cope with the limitations imposed by silicon by speeding up the computation of the elementary silicon neurons; or (2) investigate solutions that would allow use of the third dimension, e.g. optical interconnections. (A small sketch of the precision effect appears after this list.)
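
To make the update scheme in the first abstract above concrete, here is a minimal Python sketch of a Hopfield-style bipolar network with both synchronous and asynchronous updates. It is illustrative only, not code from any of these records: the size num_units, the random weights standing in for the SLM contents, and the helper names are all assumptions.

import numpy as np

num_units = 64                     # hypothetical network size
rng = np.random.default_rng(0)

# Synaptic efficacy matrix W, standing in for the values the
# detector array would read off the SLM.
W = rng.standard_normal((num_units, num_units))
W = (W + W.T) / 2.0                # symmetric weights
np.fill_diagonal(W, 0.0)           # no self-connections

def sgn(x):
    # Bipolar sign function; maps 0 to +1 to avoid dead units.
    return np.where(x >= 0.0, 1.0, -1.0)

def update_synchronous(W, s):
    # All units update at once from the same previous state.
    return sgn(W @ s)

def update_asynchronous(W, s, rng):
    # Units update one at a time in random order, each seeing
    # the latest values of the others.
    s = s.copy()
    for i in rng.permutation(len(s)):
        s[i] = sgn(W[i] @ s)
    return s

state = rng.choice([-1.0, 1.0], size=num_units)
for _ in range(10):
    state = update_synchronous(W, state)

Swapping update_synchronous for update_asynchronous in the loop reproduces the other update discipline; which one applies depends, as the abstract notes, on the structure of the NP.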
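
The fan-in and precision argument in the last abstract can be illustrated in the same spirit. The following sketch is hypothetical (the fan-in of 32, the bit widths, and quantize_weights are invented for illustration): it quantizes a neuron's weights to a few bits and measures how far the quantized activation drifts from the full-precision one.

import numpy as np

def quantize_weights(w, bits):
    # Uniform quantization to 2**bits - 1 levels, mimicking the
    # reduced weight precision of a silicon implementation.
    levels = 2 ** bits - 1
    step = 2.0 * np.abs(w).max() / levels
    return np.round(w / step) * step

rng = np.random.default_rng(1)
fan_in = 32                               # hypothetical neuron fan-in
w = rng.standard_normal(fan_in)           # full-precision weights
x = rng.choice([-1.0, 1.0], size=fan_in)  # bipolar inputs

for bits in (8, 4, 2):
    wq = quantize_weights(w, bits)
    print(f"{bits} bits: activation error = {abs(wq @ x - w @ x):.4f}")

Increasing fan_in makes the accumulated quantization error grow, which is one way to see the abstract's point that silicon's reduced fan-in and reduced weight precision go hand in hand.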