We will start with a simple, artificial linear regression problem to set the scene. We construct a synthetic dataset from a line we choose ourselves, so we know the answer in advance, and then use TensorFlow to recover that line from the data.
We do this as follows: after our imports and initialization, we enter a loop. Inside this loop, we calculate the overall loss, defined as the mean squared error between our predictions and the y values in our dataset. We then take the derivative of this loss with respect to our weights and bias. This gives us the direction in which to adjust the weights and bias to lower the loss; stepping in that direction is known as gradient descent. By repeating this loop a number of times (each full pass over the data is called an epoch), we can drive the loss down until it stops improving, at which point we can use the trained model to make predictions.
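A minimal sketch of this loop in TensorFlow 2 might look like the following. The particular line (y = 2x + 1), the noise level, the learning rate, and the epoch count are illustrative choices, not values taken from the text:

```python
import tensorflow as tf

# Artificial dataset: points scattered around a known line y = 2x + 1.
true_w, true_b = 2.0, 1.0
x = tf.random.normal([1000])
y = true_w * x + true_b + tf.random.normal([1000], stddev=0.1)

# Trainable weight and bias, initialized away from the true values.
w = tf.Variable(0.0)
b = tf.Variable(0.0)

learning_rate = 0.1
epochs = 100

for epoch in range(epochs):
    with tf.GradientTape() as tape:
        y_pred = w * x + b                        # model prediction
        loss = tf.reduce_mean(tf.square(y - y_pred))  # mean squared error
    # Derivatives of the loss with respect to w and b.
    dw, db = tape.gradient(loss, [w, b])
    # Gradient descent: step each parameter against its gradient.
    w.assign_sub(learning_rate * dw)
    b.assign_sub(learning_rate * db)

print(f"learned w = {w.numpy():.3f}, b = {b.numpy():.3f}")
```

After training, the learned w and b should sit close to the values used to generate the data, and predictions for new x values come from `w * x + b`.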