Interpretation of Functional APIs in Deep Neural Networks by Rowel Atienza

  • 6 min read
  • 16 Mar 2019

Deep neural networks have shown excellent classification accuracy on challenging, established datasets such as ImageNet, CIFAR10, and CIFAR100. This article is an excerpt from the book Advanced Deep Learning with Keras by Rowel Atienza.

This book covers advanced deep learning techniques for building successful AI, using MLPs, CNNs, and RNNs as building blocks for more advanced architectures. You’ll also study deep neural network architectures such as autoencoders, Generative Adversarial Networks (GANs), Variational Autoencoders (VAEs), and Deep Reinforcement Learning (DRL) that are critical to many cutting-edge AI results.

For conciseness, we’ll discuss two deep networks, ResNet and DenseNet. ResNet introduced the concept of residual learning, which makes it possible to build very deep networks by addressing the vanishing gradient problem in deep convolutional networks. DenseNet took this technique further by giving every convolution direct access to the input and to the feature maps of lower layers. Furthermore, DenseNet keeps the number of parameters low in deep networks through the use of Bottleneck and Transition layers.

Numerous models, such as ResNeXt and FractalNet, have been inspired by the techniques used in these two networks. With an understanding of ResNet and DenseNet, we can use their design guidelines to build our own models. Through transfer learning, we can also take advantage of pre-trained ResNet and DenseNet models for our own purposes.
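
To make the transfer learning point concrete, here is a minimal sketch (not taken from the book) that reuses a pre-trained ResNet50 from keras.applications as a frozen feature extractor. The 10-class head, the 224x224 input shape, and the average pooling are illustrative assumptions, and the call style is the functional API introduced in the next section:

from keras.applications.resnet50 import ResNet50
from keras.layers import Dense
from keras.models import Model

# load ImageNet weights without the original classifier head
base = ResNet50(weights='imagenet', include_top=False,
                pooling='avg', input_shape=(224, 224, 3))
for layer in base.layers:
    layer.trainable = False  # freeze the pre-trained convolutional layers

# attach a new classifier head (10 classes is an illustrative assumption)
outputs = Dense(10, activation='softmax')(base.output)
model = Model(inputs=base.input, outputs=outputs)
model.compile(loss='categorical_crossentropy',
              optimizer='adam',
              metrics=['accuracy'])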

In this article, we’ll discuss an important feature of Keras called the functional API, an alternative method for building networks in Keras. The functional API enables us to build more complex networks that cannot be expressed with a sequential model, which makes it well suited to deep networks such as ResNet and DenseNet.

Functional API model in Keras


In the sequential model, a layer is stacked on top of another layer, and the model is accessed only through its input and output layers. There is no simple mechanism to add an auxiliary input in the middle of the network or to extract an auxiliary output before the last layer. Furthermore, the sequential model does not support graph-like models or models that behave like Python functions. It is also not straightforward to share layers between two models. Such limitations are addressed by the functional API.
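
As a concrete illustration of what the sequential model cannot express, the sketch below (not from the book, with illustrative layer sizes) uses the functional API syntax described in the rest of this article to share a single Dense layer instance between two inputs and to tap an auxiliary output before the final layer:

from keras.layers import Input, Dense, concatenate
from keras.models import Model

input_a = Input(shape=(16,))
input_b = Input(shape=(16,))

shared = Dense(32, activation='relu')   # one layer instance ...
hidden_a = shared(input_a)              # ... reused on two different inputs
hidden_b = shared(input_b)

merged = concatenate([hidden_a, hidden_b])
aux_output = Dense(1, activation='sigmoid', name='aux_output')(merged)
hidden = Dense(8, activation='relu')(merged)
main_output = Dense(1, activation='sigmoid', name='main_output')(hidden)

# two inputs, two outputs -- not expressible with a Sequential model
model = Model(inputs=[input_a, input_b],
              outputs=[main_output, aux_output])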

The functional API is guided by the following concepts:

  • A layer is an instance that accepts a tensor as an argument. The output of a layer is another tensor. To build a model, layer instances are objects that are chained to one another through input and output tensors. This has a similar end result to stacking multiple layers in the sequential model. However, using layer instances makes it easier for models to have auxiliary or multiple inputs and outputs, since the input/output of each layer is readily accessible.
  • A model is a function between one or more input tensors and one or more output tensors. In between the model input and output tensors are the layer instances, chained to one another by their input and output tensors. A model is, therefore, a function of one or more input layers and one or more output layers. The model instance formalizes the computational graph describing how the data flows from input(s) to output(s). A short sketch follows this list.
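
The sketch below (with illustrative shapes, not from the book) shows both ideas in their simplest form: a layer instance called on a tensor returns a tensor, and Model() turns a chain of such calls into a trainable computational graph:

from keras.layers import Input, Dense
from keras.models import Model

x = Input(shape=(20,))               # an input tensor
h = Dense(64, activation='relu')(x)  # a layer instance called on a tensor returns a tensor
y = Dense(1)(h)                      # chaining calls builds the graph
model = Model(inputs=x, outputs=y)   # the model maps the input tensor to the output tensor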


After building the functional API model, training and evaluation are performed by the same functions used in the sequential model. To illustrate, in the functional API, a 2D convolutional layer, Conv2D, with 32 filters, with x as the layer input tensor and y as the layer output tensor, can be written as:

y = Conv2D(32)(x)


We can stack multiple layers to build our models. For example, we can rewrite the CNN on MNIST code as shown in Listing 2.1.1.

Listing 2.1.1 cnn-functional-2.1.1.py: Converting cnn-mnist-1.4.1.py code using functional API:

import numpy as np
from keras.layers import Dense, Dropout, Input
from keras.layers import Conv2D, MaxPooling2D, Flatten
from keras.models import Model
from keras.datasets import mnist
from keras.utils import to_categorical

# load MNIST dataset
(x_train, y_train), (x_test, y_test) = mnist.load_data()

# from sparse label to categorical
num_labels = np.amax(y_train) + 1
y_train = to_categorical(y_train)
y_test = to_categorical(y_test)

# reshape and normalize input images
image_size = x_train.shape[1]
x_train = np.reshape(x_train, [-1, image_size, image_size, 1])
x_test = np.reshape(x_test, [-1, image_size, image_size, 1])
x_train = x_train.astype('float32') / 255
x_test = x_test.astype('float32') / 255

# network parameters
input_shape = (image_size, image_size, 1)
batch_size = 128
kernel_size = 3
filters = 64
dropout = 0.3

# use functional API to build cnn layers
inputs = Input(shape=input_shape)
y = Conv2D(filters=filters,
           kernel_size=kernel_size,
           activation='relu')(inputs)
y = MaxPooling2D()(y)
y = Conv2D(filters=filters,
           kernel_size=kernel_size,
           activation='relu')(y)
y = MaxPooling2D()(y)
y = Conv2D(filters=filters,
           kernel_size=kernel_size,
           activation='relu')(y)
# image to vector before connecting to dense layer
y = Flatten()(y)
# dropout regularization
y = Dropout(dropout)(y)
outputs = Dense(num_labels, activation='softmax')(y)

# build the model by supplying inputs/outputs
model = Model(inputs=inputs, outputs=outputs)
# network model in text
model.summary()

# classifier loss, Adam optimizer, classifier accuracy
model.compile(loss='categorical_crossentropy',
              optimizer='adam',
              metrics=['accuracy'])

# train the model with input images and labels
model.fit(x_train,
          y_train,
          validation_data=(x_test, y_test),
          epochs=20,
          batch_size=batch_size)

# model accuracy on test dataset
score = model.evaluate(x_test, y_test, batch_size=batch_size)
print("\nTest accuracy: %.1f%%" % (100.0 * score[1]))


By default, MaxPooling2D uses pool_size=2, so the argument has been removed.
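
For reference, the pooling calls in the listing are therefore equivalent to spelling out the default explicitly:

y = MaxPooling2D(pool_size=2)(y)  # same as MaxPooling2D()(y)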

In Listing 2.1.1, every layer is a function of a tensor: each layer generates an output tensor that becomes the input to the next layer. To create the model, we call Model() and supply the input and output tensors, or lists of tensors. Everything else stays the same. The model in Listing 2.1.1 can be trained and evaluated with the fit() and evaluate() functions, just like a sequential model; the Sequential class is, in fact, a subclass of the Model class. Note that we inserted the validation_data argument in the fit() function to track validation accuracy during training. The accuracy ranges from 99.3% to 99.4% over 20 epochs.
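
Because the functional API supports graph-like connectivity, it is also straightforward to express the kind of skip connection that ResNet, mentioned earlier, builds on. The following is a rough sketch with illustrative filter counts, not one of the book's listings:

from keras.layers import Input, Conv2D, Add, Activation
from keras.models import Model

inputs = Input(shape=(32, 32, 16))
y = Conv2D(16, kernel_size=3, padding='same', activation='relu')(inputs)
y = Conv2D(16, kernel_size=3, padding='same')(y)
y = Add()([inputs, y])            # skip connection: add the input to the block output
outputs = Activation('relu')(y)
residual_block = Model(inputs=inputs, outputs=outputs)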

To learn how to create a model with two inputs and one output, you can head over to the book.

In this article, we have touched on an important feature of Keras, the functional API model. We covered just the material needed to build deep networks like ResNet and DenseNet. To learn more about the functional API and deep learning with Keras, you can explore the book Advanced Deep Learning with Keras by Rowel Atienza.

Build a Neural Network to recognize handwritten numbers in Keras and MNIST

Train a convolutional neural network in Keras and improve it with data augmentation [Tutorial]

Generative Adversarial Networks: Generate images using Keras GAN [Tutorial]