Causal KL: Evaluating Causal Discovery

Summary: Causal KL: Evaluating Causal Discovery
Rodney T. O'Donnell rodo@csse.monash.edu.au
Kevin B. Korb korb@csse.monash.edu.au
Lloyd Allison lloyd@csse.monash.edu.au
School of Information Technology
Monash University
Clayton, Vic, Australia
The two most commonly used criteria for assessing causal model discovery with artificial data are edit distance and Kullback-Leibler (KL) divergence, measured from the true model to the learned model. Both metrics maximally reward the true model. However, we argue that both are insufficiently discriminating when judging the relative merits of false models. Edit distance, for example, fails to distinguish between strong and weak probabilistic dependencies. KL divergence, on the other hand, rewards all statistically equivalent models equally, regardless of their different causal claims. We propose an augmented KL divergence, which we call Causal KL (CKL), that takes into account the causal relationships distinguishing observationally equivalent models. Results are presented for three variants of CKL, showing that Causal KL works well in practice.
Keywords: evaluating causal discovery, Kullback-Leibler divergence, edit distance, Causal KL
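
The abstract's distinction between statistical and causal equivalence can be made concrete with a toy calculation. The sketch below is not the authors' code and uses made-up parameters: it builds a two-variable network X -> Y, re-parameterizes the reversed structure Y -> X to produce the same joint distribution, and shows that KL divergence from the true model is zero for both, even though the two structures make opposite causal claims, while a non-equivalent independence model does incur a positive penalty.

```python
# Minimal sketch (not the authors' code): why KL divergence alone cannot
# separate observationally equivalent causal structures. All parameters
# below are made-up illustrative values.
import numpy as np

def kl(p, q):
    """KL divergence from distribution p to distribution q (same finite support)."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    return float(np.sum(p * np.log(p / q)))

# True causal model: X -> Y, with hypothetical parameters.
p_x1 = 0.7
p_y1_given_x = {0: 0.2, 1: 0.9}

# Joint P(X, Y) implied by the true model, in the order (0,0), (0,1), (1,0), (1,1).
true_joint = []
for x in (0, 1):
    px = p_x1 if x == 1 else 1.0 - p_x1
    for y in (0, 1):
        py = p_y1_given_x[x] if y == 1 else 1.0 - p_y1_given_x[x]
        true_joint.append(px * py)

# Markov-equivalent model: Y -> X, re-parameterized to reproduce the same joint.
p_y1 = true_joint[1] + true_joint[3]
p_x1_given_y = {1: true_joint[3] / p_y1, 0: true_joint[2] / (1.0 - p_y1)}
equiv_joint = []
for x in (0, 1):
    for y in (0, 1):
        py = p_y1 if y == 1 else 1.0 - p_y1
        px = p_x1_given_y[y] if x == 1 else 1.0 - p_x1_given_y[y]
        equiv_joint.append(py * px)

# A genuinely different model: X and Y independent, keeping the same marginals.
indep_joint = []
for x in (0, 1):
    px = p_x1 if x == 1 else 1.0 - p_x1
    for y in (0, 1):
        py = p_y1 if y == 1 else 1.0 - p_y1
        indep_joint.append(px * py)

print(kl(true_joint, true_joint))   # 0.0        -- true model
print(kl(true_joint, equiv_joint))  # ~0.0       -- reversed arc, different causal claim
print(kl(true_joint, indep_joint))  # ~0.24 > 0  -- only a non-equivalent model is penalized
```

As described in the abstract, CKL is intended to break exactly this tie: the reversed model would additionally be penalized for its incorrect causal claim, while plain KL scores it identically to the true model.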


Source: Allison, Lloyd - Caulfield School of Information Technology, Monash University


Collections: Computer Technologies and Information Sciences