 
Summary: Chapter 6
Continuous Time Markov Chains
In Chapter 3, we considered stochastic processes that were discrete in both time and
space, and that satisfied the Markov property: the behavior of the future of the
process only depends upon the current state and not any of the rest of the past. Here
we generalize such models by allowing for time to be continuous. As before, we will
always take our state space to be either finite or countably infinite.
A good mental image to have when first encountering continuous time Markov
chains is simply a discrete time Markov chain in which transitions can happen at any
time. We will see in the next section that this image is a very good one, and that the
Markov property will imply that the holding times between jumps, rather than being
the unit time steps of the discrete time setting, will be exponentially distributed.
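That image can be made concrete with a short simulation: run the embedded discrete time jump chain, but wait an exponentially distributed holding time in each state before jumping. A minimal sketch, assuming a hypothetical three-state chain whose rates and jump probabilities are purely illustrative (they come from nowhere in the text):

```python
import random

# Hypothetical example: holding rate in each state, and the jump
# distribution of the embedded discrete time chain (illustrative numbers).
rates = {0: 1.0, 1: 2.0, 2: 0.5}
jump_probs = {0: [(1, 0.7), (2, 0.3)],
              1: [(0, 0.5), (2, 0.5)],
              2: [(0, 1.0)]}

def simulate(x0, t_max, rng=None):
    """One path on [0, t_max]: hold an Exp(rates[x]) time in state x,
    then jump according to the embedded discrete time chain."""
    rng = rng or random.Random(42)
    t, x = 0.0, x0
    path = [(0.0, x0)]                       # (jump time, new state) pairs
    while True:
        t += rng.expovariate(rates[x])       # exponential holding time
        if t >= t_max:
            return path
        states, weights = zip(*jump_probs[x])
        x = rng.choices(states, weights=weights)[0]   # embedded-chain step
        path.append((t, x))
```

A call like `simulate(0, 10.0)` returns the jump times and states visited up to time 10; between consecutive jump times the process sits in a single state, which is exactly the mental picture above.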
6.1 Construction and Basic Definitions
We wish to construct a continuous time process on some countable state space S
that satisfies the Markov property. That is, letting F_X(s) denote all the information
pertaining to the history of X up to time s, and letting j ∈ S and s ≤ t, we want

P{X(t) = j | F_X(s)} = P{X(t) = j | X(s)}.   (6.1)
We also want the process to be time-homogeneous so that

P{X(t) = j | X(s)} = P{X(t − s) = j | X(0)}.   (6.2)
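Time homogeneity can be probed by simulation: the probability of being in a given state at time t conditioned on the state at time s should match the same probability over a window of length t − s started at time 0. The sketch below checks this by Monte Carlo on a two-state chain with exponential holding times; the rates a, b and the times s = 1, t = 2 are purely illustrative assumptions, not from the text.

```python
import random

# Hypothetical two-state chain: rate a out of state 0, rate b out of
# state 1 (illustrative values).
a, b = 1.0, 2.0

def run(times, x0, rng):
    """Return the chain's state at each query time (times sorted
    ascending), using exponential holding times between jumps."""
    out, t, x, i = [], 0.0, x0, 0
    while i < len(times):
        nxt = t + rng.expovariate(a if x == 0 else b)
        while i < len(times) and times[i] < nxt:
            out.append(x)
            i += 1
        t, x = nxt, 1 - x
    return out

rng = random.Random(1)
n = 20000

# Left side of (6.2) with s = 1, t = 2: estimate P{X(2) = 0 | X(1) = 0}.
hits = tot = 0
for _ in range(n):
    xs, xt = run([1.0, 2.0], 0, rng)
    if xs == 0:
        tot += 1
        hits += (xt == 0)
lhs = hits / tot

# Right side: estimate P{X(t - s) = 0 | X(0) = 0} = P{X(1) = 0 | X(0) = 0}.
rhs = sum(run([1.0], 0, rng)[0] == 0 for _ in range(n)) / n

# The two estimates should agree up to Monte Carlo error.
```

That the two estimates coincide is no accident: the construction by exponential holding times forgets when the clock started, which is exactly what (6.2) asserts.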
We will call any process satisfying (6.1) and (6.2) a time-homogeneous continuous
time Markov chain.