One of the most popular multiclass classification examples is labeling images of handwritten digits. The classes, or labels, in this example are {0,1,2,3,4,5,6,7,8,9}. The dataset we are going to use is popularly known as MNIST and is available from the following link: http://yann.lecun.com/exdb/mnist/. The MNIST dataset has 60,000 images for training and 10,000 images for testing, each a 28 x 28 pixel grayscale image of a single handwritten digit.
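Before wiring up datasetslib in the steps below, you can optionally take a quick look at the data with TensorFlow's built-in Keras loader, which downloads the same dataset. This is a minimal sketch (assuming a TensorFlow installation that ships tf.keras) and is not needed for the rest of the recipe:
# optional peek at MNIST via tf.keras; not used in the steps below
import tensorflow as tf

(tr_x, tr_y), (te_x, te_y) = tf.keras.datasets.mnist.load_data()
print(tr_x.shape)         # (60000, 28, 28) -- 60,000 images of 28 x 28 pixels
print(te_x.shape)         # (10000, 28, 28) -- 10,000 test images
print(sorted(set(tr_y)))  # the ten digit classes 0 through 9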
- First, import datasetslib, a library we wrote to support the examples in this book (available as a submodule of the book's GitHub repository):
import sys
DSLIB_HOME = '../datasetslib'
if DSLIB_HOME not in sys.path:
    sys.path.append(DSLIB_HOME)
%reload_ext autoreload
%autoreload 2
import datasetslib as dslib
from datasetslib.utils import imutil
from datasetslib.utils import nputil
from datasetslib.mnist import MNIST
- Set the path to the datasets folder in our home directory, which is where we want all of the datasets to be stored:
import os
datasets_root = os.path.join(os.path.expanduser('~'), 'datasets')
- Get the MNIST data using our datasetslib and print the shapes to ensure that the data is loaded properly:
mnist = MNIST()
x_train, y_train, x_test, y_test = mnist.load_data()
# have datasetslib serve one-hot labels and NumPy-layout images in batches
mnist.y_onehot = True
mnist.x_layout = imutil.LAYOUT_NP
# prepare the test set up front: load the images and one-hot encode the labels
x_test = mnist.load_images(x_test)
y_test = nputil.onehot(y_test)
print('Loaded x and y')
print('Train: x:{}, y:{}'.format(len(x_train), y_train.shape))
print('Test: x:{}, y:{}'.format(x_test.shape, y_test.shape))
- Define the hyperparameters for training the model:
learning_rate = 0.001
n_epochs = 5
mnist.batch_size = 100
- Define the placeholders and parameters for our simple model:
import tensorflow as tf

# define input images
x = tf.placeholder(dtype=tf.float32, shape=[None, mnist.n_features])
# define output labels
y = tf.placeholder(dtype=tf.float32, shape=[None, mnist.n_classes])
# model parameters, initialized to zero (fine for a single linear layer)
w = tf.Variable(tf.zeros([mnist.n_features, mnist.n_classes]))
b = tf.Variable(tf.zeros([mnist.n_classes]))
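For MNIST, mnist.n_features is 784 (the 28 x 28 pixels, flattened) and mnist.n_classes is 10, so w holds one 784-element weight column per digit class. The following plain-NumPy sketch, with a hypothetical batch, confirms the shapes that flow through the model:
import numpy as np

w_np = np.zeros((784, 10))       # one 784-weight column per digit class
b_np = np.zeros(10)
batch = np.zeros((100, 784))     # a hypothetical batch of 100 flattened images
scores = batch.dot(w_np) + b_np
print(scores.shape)              # (100, 10) -- ten class scores per image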
- Define the model with logits and y_hat:
logits = tf.add(tf.matmul(x, w), b)   # raw class scores, shape [batch, 10]
y_hat = tf.nn.softmax(logits)         # scores normalized to probabilities
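As a sanity check on what tf.nn.softmax computes, here is a small NumPy sketch with hypothetical logits for three classes (the same idea applies to our ten): each output row is a probability distribution, that is, non-negative values summing to one:
import numpy as np

logits_toy = np.array([[2.0, 1.0, 0.1],
                       [0.5, 0.5, 0.5]])   # hypothetical logits, 2 examples
exps = np.exp(logits_toy - logits_toy.max(axis=1, keepdims=True))  # stabilized
softmax_toy = exps / exps.sum(axis=1, keepdims=True)
print(softmax_toy.sum(axis=1))             # [1. 1.] -- each row sums to one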
- Define the loss function:
# clip the predictions away from 0 and 1 so that tf.log never sees a zero
epsilon = tf.keras.backend.epsilon()
y_hat_clipped = tf.clip_by_value(y_hat, epsilon, 1 - epsilon)
y_hat_log = tf.log(y_hat_clipped)
# cross-entropy per example, averaged over the batch
cross_entropy = -tf.reduce_sum(y * y_hat_log, axis=1)
loss_f = tf.reduce_mean(cross_entropy)
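The clipping above exists only to keep tf.log away from zero. Recent TensorFlow 1.x releases also provide a fused op that computes the same softmax cross-entropy directly from the raw logits and is generally more numerically stable; as an alternative sketch:
# alternative sketch: fused, numerically stabler cross-entropy on the logits
loss_alt = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits_v2(labels=y, logits=logits))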
- Define the optimizer function:
optimizer_f = tf.train.GradientDescentOptimizer(
    learning_rate=learning_rate).minimize(loss_f)
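GradientDescentOptimizer applies the plain update rule w ← w − learning_rate × ∂loss/∂w to every trainable variable. A tiny hand-rolled sketch of one such step, on the hypothetical scalar loss L(w) = w², illustrates the rule:
# one gradient-descent step on the toy loss L(w) = w**2 (hypothetical values)
w_toy = 3.0
lr = 0.001
grad = 2 * w_toy             # dL/dw for L(w) = w**2
w_toy = w_toy - lr * grad    # the update GradientDescentOptimizer applies
print(w_toy)                 # 2.994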
- Define the function to check the accuracy of the trained model:
predictions_check = tf.equal(tf.argmax(y_hat, 1), tf.argmax(y, 1))
accuracy_f = tf.reduce_mean(tf.cast(predictions_check, tf.float32))
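To see what these two lines compute, consider a hypothetical batch of three predictions: tf.argmax picks the most probable class per row, tf.equal compares it with the true class, and the mean of the resulting 0/1 values is the accuracy:
import numpy as np

y_hat_toy = np.array([[0.1, 0.8, 0.1],    # predicted class 1
                      [0.6, 0.3, 0.1],    # predicted class 0
                      [0.2, 0.2, 0.6]])   # predicted class 2
y_toy = np.array([[0, 1, 0],              # true class 1 -> correct
                  [0, 0, 1],              # true class 2 -> wrong
                  [0, 0, 1]])             # true class 2 -> correct
correct = y_hat_toy.argmax(axis=1) == y_toy.argmax(axis=1)
print(correct.mean())                     # 0.666... -- 2 of 3 are right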
- Run the training loop for each epoch in a TensorFlow session:
n_batches = int(60000 / mnist.batch_size)
with tf.Session() as tfs:
    tf.global_variables_initializer().run()
    for epoch in range(n_epochs):
        mnist.reset_index()
        for batch in range(n_batches):
            x_batch, y_batch = mnist.next_batch()
            feed_dict = {x: x_batch, y: y_batch}
            batch_loss, _ = tfs.run([loss_f, optimizer_f], feed_dict=feed_dict)
            # print('Batch loss: {}'.format(batch_loss))
- Run the evaluation function for each epoch with the test data, inside the same TensorFlow session that was created previously (note the indentation; this code sits inside the epoch loop and runs once per epoch):
        feed_dict = {x: x_test, y: y_test}
        accuracy_score = tfs.run(accuracy_f, feed_dict=feed_dict)
        print('epoch {0:04d} accuracy={1:.8f}'
              .format(epoch, accuracy_score))
We get the following output:
epoch 0000 accuracy=0.73280001
epoch 0001 accuracy=0.72869998
epoch 0002 accuracy=0.74550003
epoch 0003 accuracy=0.75260001
epoch 0004 accuracy=0.74299997
There you go. We just trained our very first logistic regression model using TensorFlow to classify handwritten digit images, and reached about 74.3% accuracy on the test set.
Now, let's see how writing the same model in Keras makes this process even easier.