This project was all about building a CNN classifier that recognizes handwritten digits more accurately than the multilayer perceptron we built in Chapter 2, Training NN for Prediction Using Regression.
Our deep convolutional neural network classifier, with max pooling and dropout, hit 99.01% accuracy on a test set of 10,000 digit images. That is a strong result, almost 12% better than our multilayer perceptron model.
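For context, the following is a minimal Keras sketch of a CNN of this kind, combining convolution, max pooling, and dropout. The layer counts and sizes here are illustrative assumptions, not necessarily the exact architecture trained in the project:

```python
# Illustrative CNN for 28x28 grayscale digit images (MNIST-style input).
# Layer sizes are assumptions for this sketch, not the chapter's exact model.
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Conv2D(32, (3, 3), activation='relu', input_shape=(28, 28, 1)),
    layers.MaxPooling2D((2, 2)),             # max pooling downsamples the feature maps
    layers.Conv2D(64, (3, 3), activation='relu'),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(128, activation='relu'),
    layers.Dropout(0.5),                     # dropout to reduce overfitting
    layers.Dense(10, activation='softmax')   # one output per digit class (0-9)
])

model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
```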
However, it is important to understand what this accuracy implies in practice. Just as we did in Chapter 2, Training NN for Prediction Using Regression, let's calculate how often an error would occur that results in a customer service issue.
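As a quick sketch of how that calculation works, the snippet below converts the model's roughly 0.99% error rate into an expected daily count of misclassified digits. The traffic volume used here is a placeholder assumption for illustration only; the chapter's actual assumptions are recapped next.

```python
# Hedged sketch: expected misclassifications per day at the reported accuracy.
# The daily volume below is an illustrative placeholder, not the chapter's figure.
test_accuracy = 0.9901                  # accuracy on the 10,000-image test set
error_rate = 1 - test_accuracy          # ~0.99% of digits misclassified

digits_read_per_day = 2000              # hypothetical number of handwritten digits processed daily
expected_errors_per_day = error_rate * digits_read_per_day

print(f"Expected misclassified digits per day: {expected_errors_per_day:.1f}")
```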
Just to refresh our memory, in this hypothetical use case, we assumed that the restaurant has an average of 30 tables at each location, and that those tables turn over two...