Discrete-time Markov chain
For a discrete-time Markov process, the state is observed at discrete time steps n = 0, 1, 2, ..., while in continuous time n is replaced by t, where t runs to infinity. Given the present state, the past and future states of a Markov chain are independent; in other words, the future depends only on the present: P(X_{n+1} = x | X_1 = x_1, ..., X_n = x_n) = P(X_{n+1} = x | X_n = x_n). In the following subsections, we will learn about the transition matrix and an application of the Markov chain to short-term forecasting of time-series data.
Transition probability
The transition probabilities between Markov states are captured in a state transition matrix. The dimension of the transition matrix is determined by the number of states in the state space: every state appears as both a row and a column, and each cell gives the probability of transitioning from its row's state to its column's state, as shown in Figure 8.2. To forecast one step ahead, we need only the transition matrix and the current state. The transition probability (matrix element p_ij) is the probability of moving from state i to state j in a single step, so each row of the matrix sums to 1.
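The one-step-ahead forecast described above can be sketched in a few lines of NumPy. The two-state weather chain and its probabilities below are illustrative assumptions, not values from the text; the mechanics, multiplying the current state distribution by the transition matrix, are the general technique.

```python
import numpy as np

# Hypothetical two-state weather chain: state 0 = "sunny", state 1 = "rainy".
# Row i, column j holds P(next state = j | current state = i).
P = np.array([
    [0.8, 0.2],   # sunny -> sunny, sunny -> rainy
    [0.4, 0.6],   # rainy -> sunny, rainy -> rainy
])

# Sanity check: each row is a probability distribution over next states.
assert np.allclose(P.sum(axis=1), 1.0)

# Current state as a probability row vector: it is sunny today.
current = np.array([1.0, 0.0])

# One-step-ahead forecast: multiply the current distribution by P.
forecast = current @ P
print(forecast)  # distribution over tomorrow's states
```

Applying the same multiplication repeatedly (`current @ np.linalg.matrix_power(P, k)`) extends the forecast k steps ahead, though in practice the prediction degrades as the horizon grows.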