Therefore, the current state of a Markov chain only affects the next state. The $p_{ij} = P(X_{n+1} = j \mid X_n = i)$ are transition probabilities. These transition probabilities do not depend on the time $n$. Since the $p_{ij}$ are probabilities, we have $0 \le p_{ij} \le 1$, and since the process remains in the state space $S$,
$$\sum_{j \in S} p_{ij} = 1.$$
The transition probabilities are very important and it is useful to collect these transition probabilities in a matrix. The transition probability matrix is given by
$$P = (p_{ij})_{i,j \in S} = \begin{pmatrix} p_{11} & p_{12} & \cdots \\ p_{21} & p_{22} & \cdots \\ \vdots & \vdots & \ddots \end{pmatrix}.$$
The $i$th row of $P$ specifies the distribution of the process at time $n+1$, given that it is in state $i$ at time $n$. For example, $p_{22}$ represents the probability of going to state 2 given that it is in state 2.
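As a small illustration (a hypothetical two-state chain, not an example from these notes), take $S = \{1, 2\}$ and
$$P = \begin{pmatrix} 0.9 & 0.1 \\ 0.4 & 0.6 \end{pmatrix}.$$
Here $p_{11} = 0.9$ is the probability of remaining in state 1, $p_{21} = 0.4$ is the probability of moving from state 2 to state 1, and each row sums to one, as required.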
We now consider multi-step transition probabilities $p_{ij}^{(n)}$, which are defined as follows:
$$p_{ij}^{(n)} = P(X_{m+n} = j \mid X_m = i) = P(X_n = j \mid X_0 = i).$$
The calculation of multi-step transition probabilities is made easy by the following Chapman-Kolmogorov lemma:

Lemma 3.1 Let $(X_n)_{n \ge 0}$ be a Markov chain with state space $S$. Then, we have for the multi-step transition probabilities
$$p_{ij}^{(m+n)} = \sum_{k \in S} p_{ik}^{(m)} p_{kj}^{(n)}.$$
Proof:
$$p_{ij}^{(m+n)} = P(X_{m+n} = j \mid X_0 = i) = \sum_{k \in S} P(X_{m+n} = j, X_m = k \mid X_0 = i)$$
$$= \sum_{k \in S} \frac{P(X_{m+n} = j, X_m = k, X_0 = i)}{P(X_m = k, X_0 = i)} \cdot \frac{P(X_m = k, X_0 = i)}{P(X_0 = i)}$$
$$= \sum_{k \in S} P(X_{m+n} = j \mid X_m = k, X_0 = i)\, P(X_m = k \mid X_0 = i).$$
Now, using the Markov property we obtain
$$p_{ij}^{(m+n)} = \sum_{k \in S} P(X_{m+n} = j \mid X_m = k)\, P(X_m = k \mid X_0 = i).$$
This implies
$$p_{ij}^{(m+n)} = \sum_{k \in S} p_{kj}^{(n)}\, p_{ik}^{(m)}.$$
The Chapman-Kolmogorov lemma can also be written in matrix form: $P^{(m+n)} = P^{(m)} P^{(n)}$, where $P^{(n)} = (p_{ij}^{(n)})$ is the matrix of $n$-step transition probabilities; in particular, $P^{(n)} = P^n$, the $n$th power of the transition matrix.
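As a numerical sanity check, here is a minimal Python sketch (the two-state matrix is the same hypothetical example used above; the values are illustrative, not from these notes):

    import numpy as np

    # Hypothetical two-state transition matrix (illustrative values only).
    P = np.array([[0.9, 0.1],
                  [0.4, 0.6]])

    # Chapman-Kolmogorov in matrix form: the n-step transition matrix is
    # the n-th matrix power of P, since P^{(m+n)} = P^{(m)} P^{(n)}.
    P2 = P @ P                                  # two-step probabilities
    assert np.allclose(P2, np.linalg.matrix_power(P, 2))

    # Spelling out p_{12}^{(2)} = sum_k p_{1k} p_{k2} (0-indexed in numpy):
    check = P[0, 0] * P[0, 1] + P[0, 1] * P[1, 1]
    assert np.isclose(P2[0, 1], check)
    print(P2[0, 1])                             # 0.15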
We now turn to a discussion of the classification of states. Some states will be visited over and over again, while others will only be visited a finite number of times and never visited again. Let, for a state $i$,
$$T_i = \min\{n \ge 1 : X_n = i\}.$$
Thus, $T_i$ is the time of the first visit to state $i$. Also let
$$f_i = P(T_i < \infty \mid X_0 = i),$$
which is the probability that the Markov chain will return to state $i$ once it started there. There are two possible cases for the $f_i$'s:
1. $f_i = 1$. This means that we are certain to return to state $i$ (over and over again). Such a state is called recurrent and will be visited infinitely many times.
2. $f_i < 1$. This means that there is a positive probability of never returning to state $i$. Such a state is called transient and will only be visited a finite number of times.
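As a quick illustration (a hypothetical example, not taken from these notes), consider the two-state chain with
$$P = \begin{pmatrix} 0.5 & 0.5 \\ 0 & 1 \end{pmatrix}.$$
Starting in state 1, the chain returns to state 1 at the next step with probability $0.5$ and otherwise moves to state 2, where it is absorbed and can never return to state 1; hence $f_1 = 0.5 < 1$ and state 1 is transient. State 2 satisfies $p_{22} = 1$, so $f_2 = 1$ and state 2 is recurrent.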
We say that a state $j$ is accessible from state $i$ if there is an $n \ge 0$ such that $p_{ij}^{(n)} > 0$, and we write $i \to j$. In words, state $j$ is accessible from state $i$ if in a finite number of steps we can get from state $i$ to state $j$. Also, if $i \to j$ and $j \to i$, we say that the states $i$ and $j$ communicate, expressed as $i \leftrightarrow j$. It can be shown that the communication relation is an equivalence relation between the states of $S$.
This means we have, for all states $i, j, k \in S$:
- $i \leftrightarrow i$ (reflexivity);
- If $i \leftrightarrow j$ then also $j \leftrightarrow i$ (symmetry);
- If $i \leftrightarrow j$ and $j \leftrightarrow k$, then also $i \leftrightarrow k$ (transitivity).
A Markov chain is known as irreducible if there is only one communication class. This means that all states communicate (the process can reach any other state with positive probability). This implies that for an irreducible Markov chain with a finite state space, all the states are recurrent.
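Computationally, the communication classes are exactly the strongly connected components of the directed graph with an edge from $i$ to $j$ whenever $p_{ij} > 0$. A minimal Python sketch (the 3-state matrix is a hypothetical example, not from these notes):

    import numpy as np
    from scipy.sparse import csr_matrix
    from scipy.sparse.csgraph import connected_components

    # Hypothetical 3-state chain: states 0 and 1 communicate with each
    # other, while state 2 is absorbing and forms its own class.
    P = np.array([[0.5, 0.4, 0.1],
                  [0.3, 0.7, 0.0],
                  [0.0, 0.0, 1.0]])

    # Communication classes = strongly connected components of the
    # directed graph with an edge i -> j whenever p_ij > 0.
    graph = csr_matrix((P > 0).astype(int))
    n_classes, labels = connected_components(graph, directed=True,
                                             connection='strong')
    print(n_classes)    # 2, so this chain is not irreducible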
The distribution $\pi = (\pi_1, \pi_2, \ldots)$ is called a stationary (or invariant or limiting) distribution if $\pi P = \pi$. This limiting distribution $\pi$ exists if the Markov chain is irreducible and all states are aperiodic (the greatest common divisor of the sets $\{n \ge 1 : p_{ii}^{(n)} > 0\}$ is one).
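Numerically, a stationary distribution can be found as a left eigenvector of $P$ for eigenvalue 1, normalised to sum to one. A minimal Python sketch, again using the hypothetical two-state matrix from above:

    import numpy as np

    # Same hypothetical two-state chain as in the earlier sketches.
    P = np.array([[0.9, 0.1],
                  [0.4, 0.6]])

    # pi P = pi means pi is a left eigenvector of P with eigenvalue 1,
    # i.e. an ordinary eigenvector of P transposed.
    eigvals, eigvecs = np.linalg.eig(P.T)
    pi = np.real(eigvecs[:, np.argmin(np.abs(eigvals - 1.0))])
    pi = pi / pi.sum()          # normalise to a probability distribution

    assert np.allclose(pi @ P, pi)   # invariance check
    print(pi)                        # approx [0.8 0.2]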
We now introduce Markov chains with a continuous state space.