On limited fan-in optimal neural networks
- Los Alamos National Lab., NM (United States)
- Wayne State Univ., Detroit, MI (United States). Vision and Neural Networks Lab.
Because VLSI implementations do not cope well with highly interconnected nets (the area of a chip grows as the cube of the fan-in), this paper analyses the influence of limited fan-in on the size and VLSI optimality of such nets. Two different approaches show that VLSI-optimal and size-optimal discrete neural networks can be obtained for small (i.e., lower than linear) fan-in values. These results have applications to hardware implementations of neural networks. The first approach is based on implementing a certain subclass of Boolean functions, the IF_{n,m} functions. The authors show that this class of functions can be implemented by VLSI-optimal (i.e., minimizing AT^2) neural networks of small constant fan-in. The second approach is based on implementing Boolean functions for which the classical Shannon decomposition can be used. Such a solution has already been used to prove bounds on neural networks with fan-ins limited to 2. The authors generalize that result to arbitrary fan-in, and prove that the size is minimized by small fan-in values, while relative minimum-size solutions can be obtained for fan-ins strictly lower than linear. Finally, a size-optimal neural network having small constant fan-in is suggested for the IF_{n,m} functions.
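As a concrete illustration (not taken from the paper), the sketch below applies Shannon's decomposition f = x_1 f|_{x_1=1} OR (NOT x_1) f|_{x_1=0} recursively, realizing an arbitrary Boolean function with gates of fan-in at most 2 and counting the gates used. The function names (shannon_size, parity) are hypothetical; the exponential gate count of the naive decomposition, visible here, is the kind of size behavior the fan-in bounds in the paper address.

```python
# Illustrative sketch, not the paper's construction: recursive Shannon
# decomposition f(x1,...,xn) = (x1 AND f1) OR (NOT x1 AND f0), where
# f0, f1 are the cofactors of f at x1 = 0 and x1 = 1. Every gate
# introduced has fan-in at most 2.

def shannon_size(f, n):
    """Count the 2-input AND/OR gates used by a naive Shannon decomposition.

    f: callable taking a tuple of n bits and returning 0 or 1.
    n: number of input variables. NOT gates are ignored for brevity.
    """
    if n == 0:
        return 0  # constant function, no gates needed
    # Cofactors over the remaining n-1 variables.
    f0 = lambda bits: f((0,) + bits)
    f1 = lambda bits: f((1,) + bits)
    # Two ANDs and one OR combine the cofactors (all fan-in 2).
    return 3 + shannon_size(f0, n - 1) + shannon_size(f1, n - 1)

# Example: 4-input parity, a hard case for fan-in-2 decompositions.
parity = lambda bits: sum(bits) % 2
print(shannon_size(parity, 4))  # 45 gates; 3*(2^n - 1) without sharing
```

Without sharing of common subcircuits the recursion S(n) = 3 + 2 S(n-1) gives S(n) = 3(2^n - 1) gates, which is why the choice of fan-in and the reuse of subfunctions matter for the size bounds discussed above.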
- Research Organization:
- Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
- Sponsoring Organization:
- USDOE Assistant Secretary for Human Resources and Administration, Washington, DC (United States)
- DOE Contract Number:
- W-7405-ENG-36
- OSTI ID:
- 654140
- Report Number(s):
- LA-UR-97-4314; CONF-971235-; ON: DE98004666; TRN: AHC2DT05%%228
- Resource Relation:
- Conference: 4th Brazilian Symposium on Neural Networks, Goiânia (Brazil), 3-5 Dec 1997; Other Information: PBD: Mar 1998
- Country of Publication:
- United States
- Language:
- English
Similar Records
Small fan-in is beautiful
Deeper and sparser nets are optimal