92¢/MFlops/s, Ultra-Large-Scale Neural-Network Training on a PIII Cluster
Gordon Bell award finalist and student paper award finalist
Keywords: neural-network, Linux cluster, matrix-multiply
Douglas Aberdeen (corresponding, presenting and student author) DOUGLAS.ABERDEEN@ANU.EDU.AU
Jonathan Baxter JONATHAN.BAXTER@ANU.EDU.AU
Research School of Information Sciences and Engineering, Australian National University, Canberra, Australia, 0200
Robert Edwards ROBERT.EDWARDS@ANU.EDU.AU
Department of Computer Science, Australian National University, Canberra, Australia, 0200
Abstract
Artificial neural networks with millions of adjustable parameters and a similar number of training examples are a potential solution for difficult, large-scale pattern recognition problems in areas such as speech and face recognition, classification of large volumes of web data, and finance. The bottleneck is that neural network training involves iterative gradient descent and is extremely computationally intensive. In this paper we present a technique for distributed training of Ultra Large Scale Neural Networks.
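To make concrete the iterative gradient descent the abstract identifies as the bottleneck, below is a minimal single-machine sketch of minibatch training for a one-hidden-layer network in NumPy. The inner loop is dominated by dense matrix multiplies, consistent with the paper's "matrix-multiply" keyword. All names and parameters are illustrative assumptions; this is not the authors' code, and the paper's distributed technique is not shown.

    import numpy as np

    def train(X, y, hidden=64, lr=0.05, epochs=5, batch=128, seed=0):
        """Minibatch gradient descent for a one-hidden-layer network.

        Illustrative only: tanh hidden layer, linear output,
        mean squared-error loss.
        """
        rng = np.random.default_rng(seed)
        n, d = X.shape
        k = y.shape[1]
        W1 = rng.standard_normal((d, hidden)) * 0.1
        W2 = rng.standard_normal((hidden, k)) * 0.1
        for _ in range(epochs):
            for i in range(0, n, batch):
                Xb, yb = X[i:i+batch], y[i:i+batch]
                m = len(Xb)
                # Forward pass: two dense matrix multiplies dominate the cost.
                H = np.tanh(Xb @ W1)
                err = H @ W2 - yb
                # Backward pass: two more matrix multiplies for the gradients.
                gW2 = H.T @ err / m
                gW1 = Xb.T @ ((err @ W2.T) * (1.0 - H**2)) / m
                # Gradient descent step.
                W1 -= lr * gW1
                W2 -= lr * gW2
        return W1, W2

    # Toy usage on synthetic data.
    rng = np.random.default_rng(1)
    X = rng.standard_normal((1024, 32))
    y = rng.standard_normal((1024, 4))
    W1, W2 = train(X, y)

With millions of parameters and a comparable number of examples, every descent step repeats these multiplies over the training set, which is why casting the computation as high-performance matrix-multiply on a Linux cluster pays off.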

  

Source: Aberdeen, Douglas - National ICT Australia & Computer Sciences Laboratory, Australian National University

 

Collections: Computer Technologies and Information Sciences