Interval neural networks
Traditional neural networks such as the multi-layer perceptron (MLP) use example patterns, i.e., pairs of real-valued observation vectors (x, y), to approximate a function f(x) = y. To determine the parameters of the approximation, a special version of the gradient-descent method called back-propagation is widely used. In many situations, observations of the input and output variables are not precise; instead, we usually have intervals of possible values. The imprecision could be due to the limited accuracy of the measuring instrument or could reflect genuine uncertainty in the observed variables. In such situations the input and output data consist of mixed data types: intervals and precise numbers. Function approximation in interval domains is considered in this paper. We discuss a modification of the classical back-propagation learning algorithm to interval domains. Results are presented with simple examples demonstrating several properties of nonlinear interval mapping, such as noise resistance and the recovery of a set of solutions to the function approximation problem.
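The abstract's core idea, propagating interval-valued inputs through a network, can be sketched with elementary interval arithmetic. The following is a minimal illustration, not the paper's actual algorithm: the helper names (`interval_mul`, `interval_neuron`) and the choice of a sigmoid activation are assumptions made here for clarity. The key facts it relies on are standard: multiplying an interval by a crisp weight maps its endpoints (swapping them when the weight is negative), interval addition adds endpoints, and a monotone increasing activation maps an interval to the interval of its endpoint images.

```python
import math


def interval_mul(w, iv):
    """Multiply the interval iv = (lo, hi) by a crisp weight w.

    A negative weight reverses the ordering of the endpoints."""
    lo, hi = iv
    return (w * lo, w * hi) if w >= 0 else (w * hi, w * lo)


def interval_add(a, b):
    """Interval addition: endpoints add component-wise."""
    return (a[0] + b[0], a[1] + b[1])


def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))


def interval_neuron(weights, bias, inputs):
    """Forward pass of a single neuron on interval-valued inputs.

    Accumulates the weighted interval sum plus bias, then applies the
    sigmoid to both endpoints; since the sigmoid is monotone
    increasing, this yields the exact image interval."""
    acc = (bias, bias)  # a crisp bias is the degenerate interval [b, b]
    for w, iv in zip(weights, inputs):
        acc = interval_add(acc, interval_mul(w, iv))
    return (sigmoid(acc[0]), sigmoid(acc[1]))
```

Note that crisp (precise) inputs are simply degenerate intervals [x, x], which is how the mixed data types mentioned in the abstract coexist in one representation.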
- Research Organization:
- Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
- Sponsoring Organization:
- USDOE, Washington, DC (United States)
- DOE Contract Number:
- W-7405-ENG-36
- OSTI ID:
- 81997
- Report Number(s):
- LA-UR-95-1223; CONF-9503135-3; ON: DE95012077
- Resource Relation:
- Conference: APIC `95, El Paso, TX (United States), Mar 1995; Other Information: PBD: [1995]
- Country of Publication:
- United States
- Language:
- English