 
Summary: An Introduction to Causal Inference
Richard Scheines
In Causation, Prediction, and Search (CPS hereafter), Peter Spirtes, Clark Glymour and I
developed a theory of statistical causal inference. In his presentation at the Notre Dame
conference (and in his paper, this volume), Glymour discussed the assumptions on which this
theory is built, traced some of the mathematical consequences of the assumptions, and pointed to
situations in which the assumptions might fail. Nevertheless, many at Notre Dame found the
theory difficult to understand or assess. As a result, I was asked to write this paper to provide
a more intuitive introduction to the theory. In what follows I shun almost all formality and avoid
the numerous and complicated qualifiers that typically accompany definitions or important
philosophical concepts. They can all be found in Glymour's paper or in CPS, which are clear
although sometimes dense. Here I attempt to fix intuitions by highlighting a few of the essential
ideas and by providing extremely simple examples throughout.
The route I take is a response to the core concern of many I talked to at the Notre Dame
conference. Our techniques take statistical data and output sets of directed graphs. Almost
everyone saw how that worked, but they could not easily assess the additional assumptions
necessary to give such output a causal interpretation, that is, an interpretation that would inform
us about how systems would respond to interventions. I will try to present in the simplest terms
the assumptions that allow us to move from probabilistic independence relations to the kind of
