MML Inference of Single-layer Neural Networks Enes Makalic, Lloyd Allison and David L. Dowe

Summary:
Inference of the optimal neural network architecture for a specific dataset is a long-standing
and difficult problem. Although a number of researchers have proposed various model selection
procedures, the problem still remains largely unsolved. The architecture of the neural network
(the number of hidden layers, hidden neurons, inputs, etc.) directly affects its performance. A
network that is too simple will not learn the problem sufficiently well, resulting in poor performance.
Conversely, a complex network can overfit and exhibit poor generalisation capabilities. This paper
introduces a novel selection criterion, based on Minimum Message Length (MML), for inference
of single hidden layer, fully-connected, feedforward neural networks. The criterion's performance is
demonstrated on several artificial and real datasets. Furthermore, the MML criterion is compared
against an MDL-based criterion and variations of Akaike's Information Criterion (AIC) and the
Bayesian Information Criterion (BIC). In all tests considered, the MML criterion never overfitted
and performed as well as, and often better than, other model selection criteria.
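The AIC and BIC comparisons described above can be sketched in a few lines. The parameter-count formula below assumes a fully-connected single-hidden-layer network with one output (weights plus biases); the candidate sizes and log-likelihoods are illustrative assumptions, not results from the paper:

```python
import math

def aic(log_likelihood, k):
    """Akaike's Information Criterion: 2k - 2*ln(L)."""
    return 2 * k - 2 * log_likelihood

def bic(log_likelihood, k, n):
    """Bayesian Information Criterion: k*ln(n) - 2*ln(L)."""
    return k * math.log(n) - 2 * log_likelihood

# Hypothetical candidates: h hidden neurons with a fitted log-likelihood
# on n = 100 data points. For i inputs, h hidden neurons and one output,
# a fully-connected single-hidden-layer network has
# k = h*(i+1) + (h+1) free parameters.
n, inputs = 100, 3
candidates = [(1, -140.0), (3, -120.0), (8, -118.0)]

for h, ll in candidates:
    k = h * (inputs + 1) + (h + 1)
    print(f"h={h}: k={k}, AIC={aic(ll, k):.1f}, BIC={bic(ll, k, n):.1f}")
```

Both criteria trade goodness of fit against model size; BIC's `k*ln(n)` term penalises extra hidden neurons more heavily than AIC's `2k` once `n > e^2`, which is why the two can select different architectures on the same data.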
1 Introduction
Artificial neural networks are an efficient tool for classification and regression problems. At the present
time the most popular neural network type in use is the Multilayer Perceptron (MLP) [11, 10]. MLPs
are characterised by the number of hidden layers, hidden neurons and connections between the layers.
The architecture of a network must be determined separately for each problem - there is no single,


Source: Allison, Lloyd - Caulfield School of Information Technology, Monash University


Collections: Computer Technologies and Information Sciences