
Markov Chains

A Markov chain is a sequence of random variables (i.e., a stochastic process) $X_0, X_1, \dots, X_t$ with the Markov property: the next state depends only on the current state, not on the states that came before it. $X_k$ is a random variable representing the state at time $k$. Formally, this is expressed as:

$$P(X_{k+1} \mid X_k, X_{k-1}, \dots, X_0) = P(X_{k+1} \mid X_k)$$

Transition Matrix

A Markov chain can be represented by a transition matrix $P = \{p_{ij}\}$, where $p_{ij}$ is the probability of transitioning from state $i$ to state $j$. Each row of the transition matrix must sum to 1.
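
To make this concrete, here is a minimal sketch in Python (the two-state sunny/rainy chain and its numbers are made up for illustration): each row of $P$ is the distribution of the next state given the current one, so sampling a trajectory only ever looks at the current state.

```python
import numpy as np

# Hypothetical 2-state weather chain: state 0 = sunny, state 1 = rainy.
P = np.array([
    [0.9, 0.1],  # from sunny: stay sunny 0.9, turn rainy 0.1
    [0.5, 0.5],  # from rainy: turn sunny 0.5, stay rainy 0.5
])
assert np.allclose(P.sum(axis=1), 1.0)  # each row must sum to 1

rng = np.random.default_rng(0)

def simulate(P, x0, steps):
    """Sample a trajectory X_0, ..., X_steps. The next state is drawn
    from row X_k of P, i.e. it depends only on the current state."""
    states = [x0]
    for _ in range(steps):
        states.append(rng.choice(len(P), p=P[states[-1]]))
    return states

print(simulate(P, x0=0, steps=10))
```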

TIP

The probability of a path factorizes into a product of transition probabilities: $P(X_1 = s_1, X_2 = s_2, \dots, X_n = s_n \mid X_0 = s_0) = p_{s_0,s_1} \, p_{s_1,s_2} \cdots p_{s_{n-1},s_n}$
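
A minimal sketch of this product, reusing the hypothetical two-state chain from above:

```python
import numpy as np

P = np.array([[0.9, 0.1],
              [0.5, 0.5]])

def path_probability(P, path):
    """P(X_1=s_1, ..., X_n=s_n | X_0=s_0) for path = [s_0, s_1, ..., s_n]:
    the product of transition probabilities along consecutive pairs."""
    return np.prod([P[a, b] for a, b in zip(path, path[1:])])

print(path_probability(P, [0, 0, 1, 1]))  # 0.9 * 0.1 * 0.5 = 0.045
```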

States

  • Recurrent state: starting from state $i$, the chain returns to $i$ with probability 1.
  • Transient state: starting from state $i$, there is a nonzero probability that the chain never returns to $i$.
  • Absorbing state: state $i$ is absorbing if it is impossible to leave it ($p_{ii} = 1$ and $p_{ij} = 0$ for $j \neq i$); see the sketch below.
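
Recurrence and transience depend on the global structure of the chain, but absorbing states can be read directly off the diagonal of $P$. A minimal sketch (the 3-state matrix is made up for illustration):

```python
import numpy as np

def absorbing_states(P):
    """State i is absorbing iff p_ii = 1 (hence p_ij = 0 for all j != i)."""
    return [i for i in range(len(P)) if np.isclose(P[i, i], 1.0)]

# Hypothetical 3-state chain in which state 2 is absorbing.
P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.5, 0.3],
              [0.0, 0.0, 1.0]])
print(absorbing_states(P))  # [2]
```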

Stationary Distribution

The vector $\pi = (\pi_i)_{i \in S}$ is a stationary distribution of a Markov chain if the following holds:

  1. $\pi P = \pi$
  2. $\sum_{i \in S} \pi_i = 1$
  3. $\pi_i \geq 0$

Since a stationary distribution satisfies $\pi P = \pi$, $\pi$ is a left eigenvector of $P$ with eigenvalue 1, which is the largest eigenvalue of a stochastic matrix.
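
A minimal sketch of computing $\pi$ numerically: extract an eigenvector of $P^T$ for eigenvalue 1 and normalize it so its entries sum to 1 (the matrix is the hypothetical two-state chain from above):

```python
import numpy as np

P = np.array([[0.9, 0.1],
              [0.5, 0.5]])

# Left eigenvector for eigenvalue 1: pi P = pi is equivalent to P^T v = v.
vals, vecs = np.linalg.eig(P.T)
v = vecs[:, np.argmin(np.abs(vals - 1.0))].real
pi = v / v.sum()  # normalize so the entries sum to 1

print(pi)                       # [0.8333..., 0.1666...]
assert np.allclose(pi @ P, pi)  # stationarity check
```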

Absorption Probability

For an absorbing state $s$, the probability of eventually reaching it from state $i$ is denoted $a_i$. We can compute the absorption probabilities by solving the following system of equations:

  1. $a_s = 1$ (if we start in the absorbing state, we are already there).
  2. $a_i = \sum_j p_{ij} a_j$ for every transient state $i$. This is the dot product of the vector $a$ with row $i$ of the transition matrix $P$ (see the sketch after this list).
  3. $a_i = 0$ for every other absorbing state $i \neq s$, since the chain can never leave $i$ to reach $s$.
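
A minimal sketch that assembles and solves this linear system with NumPy, using a hypothetical 4-state gambler's-ruin chain in which states 0 and 3 are absorbing:

```python
import numpy as np

# Hypothetical fair gambler's ruin on states {0, 1, 2, 3}:
# states 0 and 3 are absorbing, states 1 and 2 are transient.
P = np.array([[1.0, 0.0, 0.0, 0.0],
              [0.5, 0.0, 0.5, 0.0],
              [0.0, 0.5, 0.0, 0.5],
              [0.0, 0.0, 0.0, 1.0]])

def absorption_probabilities(P, s):
    """Probability a_i of eventually being absorbed in state s, for all i.
    Builds the system  a_i - sum_j p_ij a_j = 0  for transient states,
    with boundary conditions a_s = 1 and a_i = 0 for other absorbing i."""
    n = len(P)
    absorbing = {i for i in range(n) if np.isclose(P[i, i], 1.0)}
    A = np.eye(n)
    b = np.zeros(n)
    for i in range(n):
        if i in absorbing:
            b[i] = 1.0 if i == s else 0.0  # a_i fixed by the boundary
        else:
            A[i] -= P[i]                   # a_i - sum_j p_ij a_j = 0
    return np.linalg.solve(A, b)

print(absorption_probabilities(P, s=3))  # [0, 1/3, 2/3, 1]
```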