One way to measure the success or failure of the interpolation is by computing the uniform norm of the difference between the original function and the interpolant. In this particular case, that norm is close to 2.0. We could approximate this value by performing the following computation on a large set of points in the domain:
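A minimal sketch of that computation follows. The specific function and interpolant from this example are not reproduced in this excerpt, so np.sin and a Lagrange interpolant on a few nodes stand in for them; only the last two lines are the actual norm estimate.

import numpy as np
from scipy.interpolate import lagrange

# Illustrative stand-ins for the original function and its interpolant
f = np.sin
nodes = np.linspace(0, 2 * np.pi, 5)
interpolant = lagrange(nodes, f(nodes))

# Crude estimate of the uniform norm: evaluate the absolute difference
# on a dense grid of points and take the largest value found
x = np.linspace(0, 2 * np.pi, 10000)
approx_norm = np.abs(f(x) - interpolant(x)).max()
print(approx_norm)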
However, this is only a crude approximation of the actual error. To compute the true norm, we need a mechanism that finds the actual maximum of a function over a finite interval, rather than over a discrete set of points. To perform this operation for the current example, we will use the minimize_scalar routine from the scipy.optimize module.
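A sketch of that approach, continuing with the stand-in names f and interpolant from the previous snippet: since minimize_scalar searches for a minimum, we minimize the negative of the absolute difference over the interval.

from scipy.optimize import minimize_scalar

# Maximizing |f - interpolant| is the same as minimizing its negative;
# the bounded method restricts the search to the interval of interest
result = minimize_scalar(
    lambda x: -abs(f(x) - interpolant(x)),
    bounds=(0, 2 * np.pi),
    method="bounded",
)
true_norm = -result.fun    # largest absolute difference found
location = result.x        # point where it is attained
print(true_norm, location)

Note that minimize_scalar converges to a single local extremum; if the error function has several comparable peaks, it may be necessary to run the search on subintervals and keep the largest value.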