
Artificial Intelligence Review 11: 115–132, 1997. © 1997 Kluwer Academic Publishers. Printed in the Netherlands.

Voting over Multiple Condensed Nearest Neighbors
Ethem Alpaydın
Department of Computer Engineering, Boğaziçi University, TR-80815 Istanbul, Turkey
E-mail: alpaydin@boun.edu.tr
Abstract. Lazy learning methods like the k-nearest neighbor classifier require storing the
whole training set and may be too costly when this set is large. The condensed nearest
neighbor classifier incrementally stores a subset of the sample, thus decreasing storage and
computation requirements. We propose to train multiple such subsets and take a vote over
them, thus combining predictions from a set of concept descriptions. We investigate two
voting schemes: simple voting where voters have equal weight and weighted voting where
weights depend on classifiers' confidences in their predictions. We consider ways to form such
subsets for improved performance: When the training set is small, voting improves performance
considerably. If the training set is not small, then voters converge to similar solutions and we
do not gain anything by voting. To alleviate this, when the training set is of intermediate size,
we use bootstrapping to generate smaller training sets over which we train the voters. When
the training set is large, we partition it into smaller, mutually exclusive subsets and then train
the voters. Simulation results on six datasets are reported, showing good performance.
We also give a review of methods for combining multiple learners.
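The scheme the abstract describes can be sketched in a few lines of Python. This is a minimal illustration, not the paper's implementation: it uses Hart-style condensing with a 1-NN base classifier and simple (equal-weight) voting; all function names and the toy data are assumptions for the example.

```python
import random
from collections import Counter

def nearest_label(subset, x):
    # 1-NN prediction: label of the closest stored sample (squared Euclidean)
    nearest = min(subset, key=lambda s: sum((a - b) ** 2 for a, b in zip(s[0], x)))
    return nearest[1]

def condense(train):
    # Condensed nearest neighbor: start from one sample and incrementally
    # add any training point the current subset misclassifies, repeating
    # until a full pass makes no additions (the subset is then consistent
    # with the training set). Shuffling gives each voter a different subset.
    random.shuffle(train)
    subset = [train[0]]
    changed = True
    while changed:
        changed = False
        for x, y in train:
            if nearest_label(subset, x) != y:
                subset.append((x, y))
                changed = True
    return subset

def vote(subsets, x):
    # Simple voting: each condensed subset casts one equal-weight vote
    votes = Counter(nearest_label(s, x) for s in subsets)
    return votes.most_common(1)[0][0]
```

For intermediate-size training sets, each voter would instead be condensed from a bootstrap resample of the training data; for large sets, from mutually exclusive partitions.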


Source: Alpaydın, Ethem - Department of Computer Engineering, Boğaziçi University


Collections: Computer Technologies and Information Sciences