OSTI.GOV | U.S. Department of Energy
Office of Scientific and Technical Information

Title: On sparsely connected optimal neural networks

Abstract

This paper uses two different approaches to show that VLSI- and size-optimal discrete neural networks are obtained for small fan-in values. These results have applications to hardware implementations of neural networks, but also reveal an intrinsic limitation of digital VLSI technology: its inability to cope with highly connected structures. The first approach is based on implementing F_{n,m} functions. The authors show that this class of functions can be implemented in VLSI-optimal (i.e., minimizing AT^2) neural networks of small constant fan-ins. To estimate the area (A) and the delay (T) of such networks, the following cost functions are used: (i) the connectivity and the number of bits for representing the weights and thresholds, for good estimates of the area; and (ii) the fan-ins and the lengths of the wires, for good approximations of the delay. The second approach is based on implementing Boolean functions for which the classical Shannon decomposition can be used. Such a solution has already been used to prove bounds on the size of fan-in-2 neural networks. The authors generalize that result to arbitrary fan-in and prove that the size is minimized by small fan-in values. Finally, a size-optimal neural network of small constant fan-ins is suggested for F_{n,m} functions.
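
As a quick illustration of the two ingredients named in the abstract, the Python sketch below (not from the paper) exhaustively verifies the classical Shannon decomposition f = (NOT x_i AND f|x_i=0) OR (x_i AND f|x_i=1) on a small threshold function, and then combines toy area and delay proxies into the AT^2 figure of merit. The proxy formulas and every number in the comparison are illustrative assumptions, not the paper's actual cost model or construction.

import itertools

def cofactor(f, i, b):
    # Restriction of Boolean function f with variable i fixed to bit b.
    return lambda xs: f(xs[:i] + (b,) + xs[i + 1:])

def check_shannon(f, n, i):
    # Exhaustively verify f(x) == f1(x) when x_i = 1 else f0(x),
    # i.e. the classical Shannon decomposition on variable i.
    f0, f1 = cofactor(f, i, 0), cofactor(f, i, 1)
    for xs in itertools.product((0, 1), repeat=n):
        lhs = bool(f(xs))
        rhs = bool(f1(xs)) if xs[i] else bool(f0(xs))
        assert lhs == rhs, (xs, lhs, rhs)
    return True

# A 3-input majority gate: a threshold function with unit weights.
maj3 = lambda xs: int(sum(xs) >= 2)
print(check_shannon(maj3, 3, 0))  # True

# Toy AT^2 figure of merit mirroring the abstract's cost functions;
# the linear forms below are assumptions made for illustration only.
def area_proxy(connections, bits_per_weight):
    # area driven by connectivity and weight/threshold precision
    return connections * bits_per_weight

def delay_proxy(depth, max_fanin, wire_length):
    # delay driven by fan-ins and wire lengths
    return depth * (max_fanin + wire_length)

def at2(area, delay):
    return area * delay * delay

# Hypothetical comparison: a shallow, highly connected network versus
# a deeper network of small-fan-in gates (all numbers made up).
wide = at2(area_proxy(4096, 8), delay_proxy(2, 64, 64))
narrow = at2(area_proxy(6144, 8), delay_proxy(6, 4, 8))
print(wide, narrow)  # here the small-fan-in design wins on AT^2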

Authors:
 Beiu, V [1]; Draghici, S [2]
  1. Los Alamos National Lab., NM (United States)
  2. Wayne State Univ., Detroit, MI (United States)
Publication Date:
October 1997
Research Org.:
Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
Sponsoring Org.:
USDOE Assistant Secretary for Human Resources and Administration, Washington, DC (United States)
OSTI Identifier:
532531
Report Number(s):
LA-UR-97-1567; CONF-970984-1
ON: DE97008308; TRN: AHC29721%%81
DOE Contract Number:  
W-7405-ENG-36
Resource Type:
Conference
Resource Relation:
Conference: International conference on microelectronics for neural networks, evolution and fuzzy systems, Dresden (Germany), 24-26 Sep 1997; Other Information: PBD: [1997]
Country of Publication:
United States
Language:
English
Subject:
99 MATHEMATICS, COMPUTERS, INFORMATION SCIENCE, MANAGEMENT, LAW, MISCELLANEOUS; NEURAL NETWORKS; FUNCTIONS; INTEGRATED CIRCUITS; OPTIMIZATION; IMPLEMENTATION; DESIGN

Citation Formats

Beiu, V, and Draghici, S. On sparsely connected optimal neural networks. United States: N. p., 1997. Web.
Beiu, V, & Draghici, S. On sparsely connected optimal neural networks. United States.
Beiu, V, and Draghici, S. 1997. "On sparsely connected optimal neural networks". United States. https://www.osti.gov/servlets/purl/532531.
@article{osti_532531,
title = {On sparsely connected optimal neural networks},
author = {Beiu, V and Draghici, S},
abstractNote = {This paper uses two different approaches to show that VLSI- and size-optimal discrete neural networks are obtained for small fan-in values. These results have applications to hardware implementations of neural networks, but also reveal an intrinsic limitation of digital VLSI technology: its inability to cope with highly connected structures. The first approach is based on implementing F_{n,m} functions. The authors show that this class of functions can be implemented in VLSI-optimal (i.e., minimizing AT^2) neural networks of small constant fan-ins. To estimate the area (A) and the delay (T) of such networks, the following cost functions are used: (i) the connectivity and the number of bits for representing the weights and thresholds, for good estimates of the area; and (ii) the fan-ins and the lengths of the wires, for good approximations of the delay. The second approach is based on implementing Boolean functions for which the classical Shannon decomposition can be used. Such a solution has already been used to prove bounds on the size of fan-in-2 neural networks. The authors generalize that result to arbitrary fan-in and prove that the size is minimized by small fan-in values. Finally, a size-optimal neural network of small constant fan-ins is suggested for F_{n,m} functions.},
url = {https://www.osti.gov/biblio/532531},
place = {United States},
year = {1997},
month = {10}
}

Other availability
Please see Document Availability for additional information on obtaining the full-text document. Library patrons may search WorldCat to identify libraries that hold this conference proceeding.
