Building classifiers using Bayesian networks
Conference · OSTI ID:430815
- Stanford Univ., CA (United States)
- Rockwell Science Center, Palo Alto, CA (United States)
Recent work in supervised learning has shown that a surprisingly simple Bayesian classifier with strong independence assumptions among features, called naive Bayes, is competitive with state-of-the-art classifiers such as C4.5. This raises the question of whether a classifier with less restrictive assumptions can perform even better. In this paper we examine and evaluate approaches for inducing classifiers from data, based on recent results in the theory of learning Bayesian networks. Bayesian networks are factored representations of probability distributions that generalize the naive Bayes classifier and explicitly represent statements about independence. Among these approaches we single out a method we call Tree Augmented Naive Bayes (TAN), which outperforms naive Bayes, yet at the same time maintains the computational simplicity (no search involved) and robustness that are characteristic of naive Bayes. We experimentally tested these approaches on benchmark problems from the UC Irvine repository, and compared them against C4.5, naive Bayes, and wrapper-based feature selection methods.
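The independence assumption the abstract refers to can be made concrete with a minimal sketch: the classifier picks the class c maximizing log P(c) + sum_i log P(x_i | c), treating every feature as independent given the class. This is an illustrative toy implementation with Laplace smoothing, not the authors' code; the data and variable names are hypothetical.

```python
from collections import defaultdict
import math

def train_naive_bayes(examples, labels):
    """Fit a discrete naive Bayes model: estimate P(class) and, with
    Laplace smoothing, P(feature_i = v | class). Returns a predict function."""
    n_features = len(examples[0])
    class_counts = defaultdict(int)
    # feature_counts[c][i][v] = how often feature i took value v in class c
    feature_counts = defaultdict(lambda: [defaultdict(int) for _ in range(n_features)])
    values = [set() for _ in range(n_features)]  # observed domain of each feature
    for x, c in zip(examples, labels):
        class_counts[c] += 1
        for i, v in enumerate(x):
            feature_counts[c][i][v] += 1
            values[i].add(v)

    def predict(x):
        total = len(examples)
        best, best_lp = None, -math.inf
        for c, nc in class_counts.items():
            # log P(c) + sum_i log P(x_i | c): features assumed independent given c
            lp = math.log(nc / total)
            for i, v in enumerate(x):
                lp += math.log((feature_counts[c][i][v] + 1) / (nc + len(values[i])))
            if lp > best_lp:
                best, best_lp = c, lp
        return best

    return predict

# hypothetical toy data: (outlook, windy) -> play
X = [("sunny", "no"), ("sunny", "yes"), ("rain", "yes"), ("rain", "no"), ("rain", "no")]
y = ["yes", "no", "no", "yes", "yes"]
predict = train_naive_bayes(X, y)
print(predict(("sunny", "no")))  # → yes
```

TAN relaxes exactly this assumption: each feature is additionally allowed one feature parent, chosen via a tree over the features, so the per-class term becomes P(x_i | parent(x_i), c) while training remains search-free.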
- OSTI ID: 430815
- Report Number(s): CONF-960876--; CNN: Grant IRI-95-03109
- Country of Publication: United States
- Language: English
Similar Records
Scaling up the accuracy of Naive-Bayes classifiers: A decision-tree hybrid
Conference · 1996 · OSTI ID:421279

Learning limited dependence Bayesian classifiers
Conference · 1996 · OSTI ID:421307

Improving Naive Bayes with Online Feature Selection for Quick Adaptation to Evolving Feature Usefulness
Conference · 2007 · OSTI ID:929189