Wrapping up deep CNN
We're going to wrap up the deep CNN by evaluating our model's accuracy. Last time, we set up the final font recognition model; now, let's see how it does. In this section, we'll learn how to handle dropout when evaluating the model. Then, we'll see what accuracy the model achieved. Finally, we'll visualize the weights to understand what the model learned.
Make sure you pick up in your IPython session right after training the model in the previous section. Recall that when we trained our model, we used dropout to randomly remove some neuron outputs.
While this helps prevent overfitting, during testing we want to use every neuron. This both increases the accuracy and ensures that no part of the model is left out of the evaluation. That's why, in the following code lines, keep_prob is set to 1.0, so that all the neurons are always kept.
# Check accuracy on train set
A = accuracy.eval(feed_dict={x: train,
        y_: onehot_train, keep_prob: 1.0})
...
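If it isn't obvious why a single feed value can switch dropout on and off, here is a minimal, self-contained sketch of the underlying pattern. This is not the font model itself; it uses TensorFlow 1.x-style placeholders with made-up layer sizes and variable names, purely to illustrate how keep_prob flows through the graph:

import tensorflow as tf
import numpy as np

# A tiny stand-in network: one dense layer followed by dropout
x = tf.placeholder(tf.float32, [None, 4])
W = tf.Variable(tf.truncated_normal([4, 3], stddev=0.1))
h = tf.nn.relu(tf.matmul(x, W))

# keep_prob is itself a placeholder, so the same graph serves
# both training and evaluation
keep_prob = tf.placeholder(tf.float32)
h_drop = tf.nn.dropout(h, keep_prob)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    data = np.random.rand(2, 4).astype(np.float32)
    # Training-style pass: roughly half the activations are zeroed,
    # and the survivors are scaled up by 1/keep_prob
    print(sess.run(h_drop, feed_dict={x: data, keep_prob: 0.5}))
    # Evaluation pass: keep_prob of 1.0 keeps every neuron unchanged
    print(sess.run(h_drop, feed_dict={x: data, keep_prob: 1.0}))

Because dropout is driven by a placeholder rather than being baked into the graph, feeding keep_prob: 1.0 at evaluation time, as in the accuracy call above, is all it takes to use the full model.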