Introducing polynomial regression
In two dimensions, where we have one predictor and one outcome, linear modeling is about finding the best line that approximates your data. In three dimensions (two predictors and one outcome), the idea is to find the best plane, the best flat surface, that approximates your data. In N dimensions, the surface becomes a hyperplane, but the goal is always the same: find the hyperplane of dimension N-1 that best approximates the data for regression, or that best separates the classes for classification. That hyperplane is always flat.
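To make the three-dimensional case concrete, here is a minimal sketch (using scikit-learn and hypothetical data generated for illustration, not the dataset from this chapter) of fitting a plane through two predictors and one outcome:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical data: two predictors, one outcome generated from a
# flat (planar) relationship y = 3*x1 - 2*x2 + 1, plus a little noise.
rng = np.random.default_rng(42)
X = rng.uniform(-1, 1, size=(200, 2))  # two predictors
y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + 1.0 + rng.normal(0, 0.1, size=200)

model = LinearRegression().fit(X, y)
# The fitted coefficients and intercept define the best-fitting plane.
print(model.coef_, model.intercept_)
```

The two coefficients and the intercept fully describe the flat surface; with N predictors the same call returns N coefficients describing a hyperplane.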
Coming back to the very non-linear two-dimensional dataset we created, it is obvious that no straight line can properly approximate the relationship between the predictor and the outcome. There are many methods for modeling non-linear data, including polynomial regression, step functions, splines, and generalized additive models (GAMs). See Chapter 7 of An Introduction to Statistical Learning by James, Witten, Hastie...
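To see why polynomial regression helps, here is a sketch (scikit-learn assumed, with a hypothetical cubic dataset standing in for the one created earlier) comparing a straight-line fit against a degree-3 polynomial fit:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

# Hypothetical stand-in for the non-linear dataset: a cubic
# relationship between one predictor and the outcome, plus noise.
rng = np.random.default_rng(0)
x = rng.uniform(-3, 3, size=(200, 1))
y = x[:, 0] ** 3 - 2 * x[:, 0] + rng.normal(0, 1, size=200)

# A straight line fits this curve poorly...
linear = LinearRegression().fit(x, y)
# ...while expanding the predictor into polynomial features
# (x, x^2, x^3) lets the same linear machinery fit a curve.
poly = make_pipeline(PolynomialFeatures(degree=3), LinearRegression()).fit(x, y)

print(f"linear R^2:     {linear.score(x, y):.3f}")
print(f"polynomial R^2: {poly.score(x, y):.3f}")
```

The key idea: the model is still linear in its coefficients, so all the usual linear-regression machinery applies; only the features have been transformed.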