Summary
We began this chapter with a discussion of why k-fold cross-validation was developed for traditional machine learning applications and why it does not work with time series data. You then learned about forward-chaining, also called rolling-origin cross-validation, which is designed for use with time series.
You learned the keywords initial, horizon, period, and cutoffs, which are used to define your cross-validation parameters, and you learned how to implement them in Prophet. Finally, you learned about the different options Prophet provides for parallelization to speed up model evaluation, as sketched below.
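As a quick refresher, here is a minimal sketch of how those pieces fit together. It assumes an illustrative DataFrame df with the standard ds and y columns, and the window sizes (730, 180, and 365 days) are placeholder values, not a recommendation:

from prophet import Prophet
from prophet.diagnostics import cross_validation

# df is assumed to hold 'ds' (datestamp) and 'y' (value) columns
m = Prophet()
m.fit(df)

# Train on the first 730 days, forecast 365 days past each cutoff,
# and advance the cutoff by 180 days between folds.
# parallel='processes' spreads the folds across CPU cores.
df_cv = cross_validation(m,
                         initial='730 days',
                         period='180 days',
                         horizon='365 days',
                         parallel='processes')

The resulting df_cv contains one forecast row per cutoff and horizon step, which is what the performance metrics in the next chapter are computed from.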
These techniques provide you with a statistically robust way to evaluate and compare models. By keeping the data used for training separate from the data used for testing, you remove bias from the process and can be more confident that your model will perform well when making new predictions about the future.
In the next chapter, you'll apply what you learned here to measure your model's performance...