Chapter 11: Cross-Validation
The concept of keeping training data and testing data separate is sacrosanct in machine learning and statistics. You should never train a model and test its performance on the same data. Setting data aside for testing purposes has a downside, though: that data has valuable information that you would want to include in training. Cross-validation is a technique that's used to circumvent this problem.
You may be familiar with k-fold cross-validation, but if you are not, we will briefly cover it in this chapter. K-fold, however, will not work on time series. It requires that the observations be independent, an assumption that time series data violates. An understanding of k-fold will help you learn how forward-chaining cross-validation works and why it is necessary for time series data.
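To make the contrast concrete, the following is a minimal sketch of forward-chaining (expanding-window) cross-validation in plain Python. The function name and the split sizes are illustrative assumptions, not Prophet's API: the point is that, unlike k-fold, every fold trains only on observations that come before its test window, preserving temporal order.

```python
# A minimal, illustrative sketch of forward-chaining cross-validation.
# Unlike k-fold, each fold trains only on observations that precede the
# test window, so the temporal order of the series is never violated.
# The function name and parameters here are hypothetical, for illustration.

def forward_chaining_splits(n_obs, initial, horizon, step):
    """Yield (train_indices, test_indices) pairs for a series of n_obs points.

    initial -- size of the first training window
    horizon -- size of each test window
    step    -- how far the cutoff advances between folds
    """
    cutoff = initial
    while cutoff + horizon <= n_obs:
        train = list(range(0, cutoff))                 # everything up to the cutoff
        test = list(range(cutoff, cutoff + horizon))   # the next `horizon` points
        yield train, test
        cutoff += step

# Example: 10 observations, first training window of 4, test windows of 2
for train, test in forward_chaining_splits(10, initial=4, horizon=2, step=2):
    print(train, "->", test)
```

Notice that each successive fold's training set grows to include the previous fold's test window, while the test window always lies strictly in the "future" relative to its training data.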
After learning how to perform cross-validation in Prophet, you will learn how to speed up the computation of cross-validation through Prophet's ability to parallelize...