Understanding Loss Functions in Linear Regression
It is important to know the effect of loss functions on algorithm convergence. Here we will illustrate how the L1 and L2 loss functions affect convergence in linear regression.
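For reference, with targets $y_i$ and model predictions $\hat{y}_i$ over $N$ data points, the two losses compared in this recipe are the standard mean absolute error and mean squared error:

$$L_1 = \frac{1}{N}\sum_{i=1}^{N}\lvert y_i - \hat{y}_i\rvert, \qquad L_2 = \frac{1}{N}\sum_{i=1}^{N}(y_i - \hat{y}_i)^2$$

Intuitively, the L2 gradient shrinks as predictions approach their targets, so steps get smaller near the optimum, whereas the L1 gradient keeps a constant magnitude, which can make training overshoot and oscillate around the optimum when the learning rate is too large.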
Getting ready
We will use the same iris dataset as in the prior recipe, but we will change our loss
functions and learning rates to see how convergence changes.
How to do it…
- The start of the program is unchanged from before until we get to our loss function: we load the necessary libraries, start a session, load the data, create placeholders, and define our variables and model. One thing to note is that we pull the learning rate and the number of model iterations out into their own variables, so that we can change these parameters quickly and see their effect. Use the following code (a sketch of the remaining setup follows the listing):

    import matplotlib.pyplot as plt
    import numpy as np
    import tensorflow as tf
    from sklearn import datasets
    sess = tf.Session()
    iris = datasets.load_iris()
    x_vals = np.array([x[3] for x in iris...