
A Comparison of Model Aggregation Methods for Regression, by Zafer Barutçuoğlu and Ethem Alpaydın

Summary: A Comparison of Model Aggregation Methods for Regression
Zafer Barutçuoğlu and Ethem Alpaydın
Department of Computer Engineering, Boğaziçi University, Istanbul, Turkey
zbarutcu@turk.net, alpaydin@boun.edu.tr
Abstract. Combining machine learning models is a means of improving overall accuracy. Various algorithms have been proposed to create aggregate models from other models; two popular examples for classification are Bagging and AdaBoost. In this paper we examine their adaptation to regression, and benchmark them on synthetic and real-world data. Our experiments reveal that different types of AdaBoost algorithms require different complexities of base models. They outperform Bagging at their best, but Bagging achieves a consistent level of success with all base models, providing a robust alternative.
1 Introduction
Combining multiple instances of the same model type is a means of increasing robustness to variance, reducing the overall sensitivity to different starting parameters and noise. Two well-known algorithms for this purpose are Bagging [1] and AdaBoost [2,3]. Both have been analyzed for classification in much more detail than for regression, possibly due to the wider availability of real-life classification applications. Adapting these algorithms from classification to regression raises some issues. In this paper we compare the Bagging
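The Bagging side of the comparison is simple to state: each ensemble member is trained on a bootstrap resample of the data, and regression predictions are aggregated by averaging. The sketch below illustrates this, assuming a least-squares linear base model and NumPy; the base models and aggregation details benchmarked in the paper itself differ.

```python
import numpy as np

def bagged_regression(X, y, n_models=10, rng=None):
    """Fit an ensemble of base regressors on bootstrap resamples.

    Illustrative only: the base model here is a plain least-squares
    linear fit, not one of the base models used in the paper.
    """
    rng = np.random.default_rng(rng)
    n = len(X)
    coefs = []
    for _ in range(n_models):
        idx = rng.integers(0, n, size=n)        # bootstrap resample with replacement
        Xb = np.c_[np.ones(n), X[idx]]          # prepend a bias column
        w, *_ = np.linalg.lstsq(Xb, y[idx], rcond=None)
        coefs.append(w)
    return np.array(coefs)                      # shape: (n_models, n_features + 1)

def predict(coefs, X):
    Xb = np.c_[np.ones(len(X)), X]
    # For regression, Bagging aggregates by averaging member outputs
    return (Xb @ coefs.T).mean(axis=1)
```

Averaging reduces the variance contributed by each member without changing the ensemble's bias, which is why Bagging's benefit depends mainly on how unstable the base model is.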


Source: Alpaydın, Ethem - Department of Computer Engineering, Boğaziçi University


Collections: Computer Technologies and Information Sciences