Optimizing 0/1 Loss for Perceptrons by Random Coordinate Descent

Summary: Proceedings of the International Joint Conference on Neural Networks, Orlando, Florida, USA, August 12-17, 2007.
Optimizing 0/1 Loss for Perceptrons by Random Coordinate Descent
Ling Li and Hsuan-Tien Lin
Abstract--The 0/1 loss is an important cost function for perceptrons. Nevertheless, it cannot be easily minimized by most existing perceptron learning algorithms. In this paper, we propose a family of random coordinate descent algorithms to directly minimize the 0/1 loss for perceptrons, and prove their convergence. Our algorithms are computationally efficient, and usually achieve the lowest 0/1 loss compared with other algorithms. Such advantages make them favorable for nonseparable real-world problems. Experiments show that our algorithms are especially useful for ensemble learning, and could achieve the lowest test error for many complex data sets when coupled with AdaBoost.
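
To make the idea concrete, here is a minimal Python sketch of descending the 0/1 loss along random directions for a linear threshold classifier. It is an illustration under assumed simplifications, not the authors' algorithm: the names (rcd_perceptron, zero_one_loss), the Gaussian random directions, and the fixed grid of step sizes are invented for this sketch, whereas the paper's family of algorithms uses an exact line search along each direction and comes with a convergence proof.

import numpy as np

def zero_one_loss(w, X, y):
    # Fraction of examples misclassified by the linear threshold
    # classifier sign(X @ w); the bias is folded in as a constant feature.
    return float(np.mean(np.sign(X @ w) != y))

def rcd_perceptron(X, y, n_rounds=200, seed=0):
    # Toy random-direction descent on the 0/1 loss. Each round draws a
    # random direction d and accepts a candidate w + a*d (over a small
    # step-size grid, both signs) only if it strictly lowers the loss.
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    w[0] = 1.0                              # arbitrary nonzero start
    best = zero_one_loss(w, X, y)
    grid = np.array([0.01, 0.1, 1.0, 10.0])
    for _ in range(n_rounds):
        d = rng.standard_normal(X.shape[1])
        base = w.copy()                     # evaluate all steps from one base
        for a in np.concatenate([grid, -grid]):
            cand = base + a * d
            loss = zero_one_loss(cand, X, y)
            if loss < best:
                w, best = cand, loss
    return w, best

# Hypothetical toy data: a noisy-free linearly separable problem.
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 3))
X[:, 0] = 1.0                               # constant bias feature
y = np.sign(X @ np.array([0.5, 1.0, -2.0]))
w, err = rcd_perceptron(X, y)
print(f"training 0/1 loss: {err:.3f}")

Because candidates are accepted only on strict improvement, the training 0/1 loss in this sketch decreases monotonically; the acceptance rule, not the choice of direction, is what guarantees that.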
I. INTRODUCTION
The perceptron was first introduced by Rosenblatt [1] as a probabilistic model for information processing in the brain. It is simply a linear threshold classifier, which can be thought

  

Source: Abu-Mostafa, Yaser S. - Department of Electrical Engineering and Computer Science Department, California Institute of Technology

 

Collections: Computer Technologies and Information Sciences