Summary
In this chapter, we learned about the Markov chain, which models a special class of stochastic processes: those in which the entire past can be assumed to be encoded in the present state, which in turn determines the next (future) state. An application of Markov chains to modeling time-series data was illustrated. We also covered the most common MCMC sampling algorithm, Metropolis-Hastings, with code to illustrate it. If a system exhibits non-stationary behavior (its transition probabilities change over time), then a Markov chain is not the appropriate model, and a more complex model may be required to capture the dynamics of the system.
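As a recap, the core of Metropolis-Hastings can be sketched in a few lines. The sketch below uses a symmetric Gaussian random-walk proposal to sample from a standard normal target; the function name, step size, and target density are illustrative assumptions, not the exact code from the chapter.

```python
import math
import random

def metropolis_hastings(log_target, n_samples, x0=0.0, step=1.0, seed=42):
    """Draw samples from an unnormalized density via Metropolis-Hastings
    with a symmetric Gaussian random-walk proposal (illustrative sketch)."""
    rng = random.Random(seed)
    x = x0
    samples = []
    for _ in range(n_samples):
        proposal = x + rng.gauss(0.0, step)
        # Symmetric proposal, so the acceptance ratio reduces to
        # min(1, p(proposal) / p(x)); compare in log space for stability.
        if math.log(rng.random()) < log_target(proposal) - log_target(x):
            x = proposal
        samples.append(x)
    return samples

# Target: standard normal, log-density up to an additive constant.
log_std_normal = lambda x: -0.5 * x * x

samples = metropolis_hastings(log_std_normal, 20000)
burned = samples[5000:]  # discard burn-in before computing statistics
mean = sum(burned) / len(burned)
var = sum((s - mean) ** 2 for s in burned) / len(burned)
```

After discarding burn-in, the sample mean and variance should be close to the target's 0 and 1, which is a quick sanity check that the sampler has mixed.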
With this chapter, we conclude the second part of the book. In the next chapter, we will explore fundamental optimization techniques, some of which are used in machine learning. We will touch upon evolutionary optimization, optimization in operations research, and techniques leveraged in training neural networks.