PARALLEL NEURAL NETWORK TRAINING ON MULTI-SPERT

Summary: PARALLEL NEURAL NETWORK TRAINING ON MULTI-SPERT
PHILIPP FÄRBER AND KRSTE ASANOVIĆ
International Computer Science Institute, Berkeley, CA 94704

Multi-Spert is a scalable parallel system built from multiple Spert-II nodes which we have constructed to speed error backpropagation neural network training for speech recognition research. We present the Multi-Spert hardware and software architecture, and describe our implementation of two alternative parallelization strategies for the backprop algorithm. We have developed detailed analytic models of the two strategies which allow us to predict performance over a range of network and machine parameters. The models' predictions are validated by measurements on a prototype five-node Multi-Spert system. This prototype achieves a neural network training performance of over 530 million connection updates per second (MCUPS) while training a realistic speech application neural network. The model predicts that performance will scale to over 800 MCUPS for eight nodes.
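
The abstract leaves the two parallelization strategies unnamed; for backprop they are typically some form of pattern (data) parallelism, where each node processes a slice of the training patterns and partial weight gradients are summed across nodes, and network (unit) parallelism, where the layers themselves are partitioned. The Python sketch below illustrates only the pattern-parallel idea under stated assumptions: the layer sizes, node count, and serial per-node loop are hypothetical stand-ins, not Multi-Spert parameters, and the MCUPS figure it prints counts one connection update per weight per training pattern, the usual convention for this metric.

# Hedged sketch: pattern-parallel backprop for a one-hidden-layer MLP.
# All sizes and the simulated "nodes" are illustrative assumptions,
# not Multi-Spert hardware parameters.
import time
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid, n_out = 153, 200, 56   # hypothetical layer sizes
batch, nodes = 256, 5               # batch split evenly across 5 "nodes"

W1 = 0.1 * rng.standard_normal((n_in, n_hid))
W2 = 0.1 * rng.standard_normal((n_hid, n_out))
X = rng.standard_normal((batch, n_in))
T = rng.standard_normal((batch, n_out))

def node_gradients(Xs, Ts):
    """Forward/backward pass on one node's shard of the batch."""
    H = np.tanh(Xs @ W1)              # hidden activations
    Y = H @ W2                        # linear output layer
    dY = (Y - Ts) / batch             # squared-error output gradient
    gW2 = H.T @ dY
    dH = (dY @ W2.T) * (1.0 - H * H)  # tanh derivative
    gW1 = Xs.T @ dH
    return gW1, gW2

t0 = time.perf_counter()
# Each "node" computes gradients on its shard; summing the partial
# gradients stands in for the inter-node reduction step.
shards = zip(np.array_split(X, nodes), np.array_split(T, nodes))
grads = [node_gradients(Xs, Ts) for Xs, Ts in shards]
W1 -= 0.01 * sum(g[0] for g in grads)
W2 -= 0.01 * sum(g[1] for g in grads)
elapsed = time.perf_counter() - t0

connections = n_in * n_hid + n_hid * n_out
print(f"{connections * batch / elapsed / 1e6:.1f} MCUPS (illustrative only)")

Summing per-shard gradients is mathematically equivalent to a single large-batch update, which is why pattern parallelism scales well until the cost of the gradient reduction dominates; that communication cost is one plausible reason the quoted scaling from five to eight nodes is sublinear.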
1 Background and Motivation
The Spert-II board is a general purpose workstation accelerator based on the T0 vector microprocessor.

Source: Asanović, Krste - Computer Science and Artificial Intelligence Laboratory & Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology (MIT)

Collections: Computer Technologies and Information Sciences