PARALLEL NEURAL NETWORK TRAINING ON MULTI-SPERT
 

Summary:
PHILIPP FÄRBER AND KRSTE ASANOVIĆ
International Computer Science Institute,
Berkeley, CA 94704
Multi-Spert is a scalable parallel system built from multiple Spert-II nodes which
we have constructed to speed error backpropagation neural network training for
speech recognition research. We present the Multi-Spert hardware and software
architecture, and describe our implementation of two alternative parallelization
strategies for the backprop algorithm. We have developed detailed analytic models
of the two strategies which allow us to predict performance over a range of network
and machine parameters. The models' predictions are validated by measurements
for a prototype five-node Multi-Spert system. This prototype achieves a neural
network training performance of over 530 million connection updates per second
(MCUPS) while training a realistic speech application neural network. The model
predicts that performance will scale to over 800 MCUPS for eight nodes.
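
The MCUPS figure counts weight (connection) updates performed per second during backprop training. The Python sketch below illustrates how the metric is computed for a plain per-pattern training loop; the layer sizes, the sigmoid-hidden/linear-output network, and the random data are all hypothetical stand-ins, and this is only an illustration of the unit of measurement, not the Multi-Spert implementation.

import time
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical layer sizes for a small three-layer network.
n_in, n_hid, n_out = 153, 512, 56
W1 = rng.standard_normal((n_in, n_hid)) * 0.01
W2 = rng.standard_normal((n_hid, n_out)) * 0.01
lr = 0.01

# Random training data standing in for real speech patterns.
n_patterns = 1000
X = rng.standard_normal((n_patterns, n_in))
T = rng.standard_normal((n_patterns, n_out))

start = time.perf_counter()
for x, t in zip(X, T):
    # Forward pass: sigmoid hidden layer, linear output layer.
    h = 1.0 / (1.0 + np.exp(-(x @ W1)))
    y = h @ W2
    # Backward pass: output error, then backpropagated hidden error.
    dy = y - t
    dh = (W2 @ dy) * h * (1.0 - h)
    # Per-pattern weight updates: every connection is touched once.
    W2 -= lr * np.outer(h, dy)
    W1 -= lr * np.outer(x, dh)
elapsed = time.perf_counter() - start

# MCUPS = (connections updated per pattern) x (patterns) / time, in millions.
connections = n_in * n_hid + n_hid * n_out
mcups = connections * n_patterns / elapsed / 1e6
print(f"{mcups:.1f} MCUPS")

On real hardware the achieved figure depends on how the forward and backward passes are vectorized and, for Multi-Spert, on how the work is partitioned across nodes; the snippet only pins down what is being counted.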
1 Background and Motivation
The Spert-II board is a general-purpose workstation accelerator based on the
T0 vector microprocessor [1]. Originally, it was designed for the training of large
artificial neural networks used in continuous speech recognition. For phoneme

  

Source: Asanovic, Krste - Computer Science and Artificial Intelligence Laboratory & Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology (MIT)

 

Collections: Computer Technologies and Information Sciences