Entropy-based comparison of neural networks for classification
- Wayne State Univ., Detroit, MI (United States). Vision and Neural Networks Lab.
- Los Alamos National Lab., NM (United States)
In recent years, multilayer feedforward neural networks (NNs) have been shown to be very effective tools in many different applications. A natural and essential step in continuing the diffusion of these tools into day-to-day use is their hardware implementation, which is by far the most cost-effective solution for large-scale use. When a hardware implementation is contemplated, the size of the NN becomes crucial because the size is directly proportional to the cost of the implementation. In this light, any theoretical result that establishes bounds on the size of a NN for a given problem is extremely important. In the same context, a particularly interesting case is that of neural networks using limited integer weights. These networks are particularly suitable for hardware implementation because they need less space for storing the weights, and fixed-point, limited-precision arithmetic is much cheaper to implement than its floating-point counterpart. This paper presents an entropy-based analysis which completes, unifies, and correlates results partially presented in [Beiu, 1996, 1997a] and [Draghici, 1997]. Tight bounds for real- and integer-weight neural networks are calculated.
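The storage advantage of limited integer weights mentioned in the abstract is straightforward to quantify. The sketch below is illustrative only (it is not taken from the paper): it assumes weights restricted to the integer range [-p, p] and compares the bits needed per weight against a 32-bit floating-point baseline.

```python
import math

def bits_per_integer_weight(p: int) -> int:
    """Bits needed to store one weight drawn from the set {-p, ..., p},
    which has 2p + 1 distinct values (assumed encoding: plain binary)."""
    return math.ceil(math.log2(2 * p + 1))

# Example: weights limited to [-8, 8] take ceil(log2(17)) = 5 bits each,
# versus 32 bits for a single-precision float -- roughly a 6x reduction.
for p in (1, 3, 8, 127):
    b = bits_per_integer_weight(p)
    print(f"p = {p:>3}: {b} bits/weight ({32 / b:.1f}x smaller than float32)")
```

Beyond storage, the same restriction is what makes the fixed-point arithmetic cheap: a multiplier for 5-bit operands is far smaller in silicon than a floating-point unit.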
- Research Organization:
- Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
- Sponsoring Organization:
- USDOE, Washington, DC (United States)
- DOE Contract Number:
- W-7405-ENG-36
- OSTI ID:
- 468573
- Report Number(s):
- LA-UR-97-483; CONF-970598-1; ON: DE97004794; TRN: AHC29710%%68
- Resource Relation:
- Conference: WIRN VIETRI '97: 9th Italian workshop on neural nets, Salerno (Italy), 22-24 May 1997; Other Information: PBD: [1997]
- Country of Publication:
- United States
- Language:
- English