
Functional API

Keras's functional API gives us greater flexibility when it comes to building complex models. With it, we can create models with non-sequential connections between layers, models with multiple inputs or outputs, and models that share or reuse layers.
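For instance, a shared layer is simply a layer object that is reused on more than one input tensor, so its weights are shared across branches. Here is a minimal sketch; the input and layer sizes are illustrative:

library(keras)

# A standalone layer instance, not yet composed with any input
shared_dense <- layer_dense(units = 32, activation = "relu")

input_a <- layer_input(shape = c(16))
input_b <- layer_input(shape = c(16))

# Calling the same layer object on both inputs reuses its weights
encoded_a <- input_a %>% shared_dense()
encoded_b <- input_b %>% shared_dense()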

How to do it...

In this section, we will use the same simulated dataset that we created in the previous recipe, Sequential API. Here, we will create a multi-output functional model:
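If you have not run that recipe, the following stand-in reproduces its shape assumptions (784 input features and a single numeric target; the sample size of 1,000 is illustrative):

# Illustrative stand-in for the simulated data from the Sequential API recipe
x_data <- matrix(rnorm(1000 * 784), nrow = 1000, ncol = 784)
y_data <- matrix(rnorm(1000), ncol = 1)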

  1. Let's start by importing the required library and creating an input layer:
library(keras)

# input layer
inputs <- layer_input(shape = c(784))
  2. Next, we need to define two output layers:
predictions1 <- inputs %>%
  layer_dense(units = 8) %>%
  layer_activation('relu') %>%
  layer_dense(units = 1, name = "pred_1")

predictions2 <- inputs %>%
  layer_dense(units = 16) %>%
  layer_activation('tanh') %>%
  layer_dense(units = 1, name = "pred_2")
  3. Now, we need to define a functional Keras model:
model_functional <- keras_model(inputs = inputs, outputs = c(predictions1, predictions2))

Let's look at the summary of the model:

summary(model_functional)

The summary lists each layer, its output shape and parameter count, and the layers it is connected to, including both output branches, pred_1 and pred_2.

  4. Now, we compile our model:
model_functional %>% compile(
  loss = "mse",
  optimizer = optimizer_rmsprop(),
  metrics = list("mean_absolute_error")
)
  5. Next, we need to train the model and visualize its training metrics:
history_functional <- model_functional %>% fit(
  x_data,
  list(y_data, y_data),
  epochs = 30,
  batch_size = 128,
  validation_split = 0.2
)

Now, let's plot the model loss for the training and validation data of prediction 1 and prediction 2:

# Plot the model loss of the prediction 1 training data
plot(history_functional$metrics$pred_1_loss, main = "Model Loss", xlab = "epoch", ylab = "loss", col = "blue", type = "l")

# Plot the model loss of the prediction 1 validation data
lines(history_functional$metrics$val_pred_1_loss, col = "green")

# Plot the model loss of the prediction 2 training data
lines(history_functional$metrics$pred_2_loss, col = "red")

# Plot the model loss of the prediction 2 validation data
lines(history_functional$metrics$val_pred_2_loss, col = "black")

# Add a legend
legend("topright", c("training loss prediction 1", "validation loss prediction 1", "training loss prediction 2", "validation loss prediction 2"), col = c("blue", "green", "red", "black"), lty = 1)

The following plot shows the training and validation loss for both prediction 1 and prediction 2:

Now, let's plot the mean absolute error for the training and validation data of prediction 1 and prediction 2:

# Plot the mean absolute error of the prediction 1 training data
plot(history_functional$metrics$pred_1_mean_absolute_error, main = "Mean Absolute Error", xlab = "epoch", ylab = "error", col = "blue", type = "l")

# Plot the mean absolute error of the prediction 1 validation data
lines(history_functional$metrics$val_pred_1_mean_absolute_error, col = "green")

# Plot the mean absolute error of the prediction 2 training data
lines(history_functional$metrics$pred_2_mean_absolute_error, col = "red")

# Plot the mean absolute error of the prediction 2 validation data
lines(history_functional$metrics$val_pred_2_mean_absolute_error, col = "black")

# Add a legend
legend("topright", c("training mean absolute error prediction 1", "validation mean absolute error prediction 1", "training mean absolute error prediction 2", "validation mean absolute error prediction 2"), col = c("blue", "green", "red", "black"), lty = 1)

The following plot shows the mean absolute errors for prediction 1 and prediction 2:
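Rather than assembling these plots by hand, you can also use the plot() method that the keras R package provides for training history objects; it draws every recorded metric, faceted by metric and split into training and validation curves:

# One-line alternative: plot all recorded metrics at once
plot(history_functional)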

How it works...

To create a model using the functional API, we need to create the input and output layers independently, and then pass them to the keras_model() function to define the complete model. In the How to do it... section, we created a model with two different output layers that share a single input tensor.

In step 1, we created an input tensor using the layer_input() function, which serves as an entry point into the computation graph generated by the Keras model. In step 2, we defined two output layers with different configurations; that is, different activation functions and numbers of units. The input tensor flows through these layers and produces two different outputs.

In step 3, we defined our model using the keras_model() function. It takes two arguments: inputs and outputs. These arguments specify which layers act as the input and output layers of the model. In the case of multi-input or multi-output models, you can use a vector of input layers and output layers, as shown here:

keras_model(inputs = c(input_layer_1, input_layer_2), outputs = c(output_layer_1, output_layer_2))

After configuring our model, we defined the learning process, trained the model, and visualized its loss and mean absolute error metrics. The compile() and fit() functions, which we used in steps 4 and 5, are described in detail in the How it works... section of the Sequential API recipe.
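Once trained, a multi-output model returns one set of predictions per output layer. With predict(), the result is a list whose elements follow the order of the outputs argument of keras_model():

# Predictions are returned as a list, one element per output layer
preds <- model_functional %>% predict(x_data)
pred_1 <- preds[[1]]  # predictions from the pred_1 branch
pred_2 <- preds[[2]]  # predictions from the pred_2 branch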

There's more...

You will come across scenarios where you want to feed the output of one model into another model, alongside an additional input. The layer_concatenate() function can be used to do this. Let's define a new input, concatenate it with the predictions1 output layer we defined in the How to do it... section of this recipe, and build a model:

# Define a new input for the model
new_input <- layer_input(shape = c(5), name = "new_input")

# Define the output layer of the new model
main_output <- layer_concatenate(c(predictions1, new_input)) %>%
  layer_dense(units = 64, activation = 'relu') %>%
  layer_dense(units = 1, activation = 'sigmoid', name = 'main_output')

# Define a multi-input, multi-output model
model <- keras_model(
  inputs = c(inputs, new_input),
  outputs = c(predictions1, main_output)
)

We can view the model's summary using the summary() function.

It is good practice to give layers unique names when working with complex models.
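To train this model, the inputs and targets are supplied as lists, in the same order as the inputs and outputs arguments of keras_model(). A minimal sketch, assuming a hypothetical matrix new_data with five columns (to match new_input) and reusing y_data as an illustrative target for both outputs:

# Hypothetical second input: must have 5 columns to match new_input
new_data <- matrix(rnorm(1000 * 5), nrow = 1000, ncol = 5)

model %>% compile(
  loss = "mse",
  optimizer = optimizer_rmsprop(),
  metrics = list("mean_absolute_error")
)

model %>% fit(
  x = list(x_data, new_data),  # same order as the inputs argument
  y = list(y_data, y_data),    # one target per output layer
  epochs = 10,
  batch_size = 128
)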
