Deep Learning with Hadoop

You're reading from Deep Learning with Hadoop: Distributed Deep Learning with Large-Scale Data

Product type: Paperback
Published in: Feb 2017
Publisher: Packt
ISBN-13: 9781787124769
Length: 206 pages
Edition: 1st Edition
Author: Dipayan Dev
Table of Contents (9 chapters)

Preface
1. Introduction to Deep Learning
2. Distributed Deep Learning for Large-Scale Data
3. Convolutional Neural Network
4. Recurrent Neural Network
5. Restricted Boltzmann Machines
6. Autoencoders
7. Miscellaneous Deep Learning Operations using Hadoop
References

Basic layers of CNN

A CNN is composed of a sequence of layers, each of which transforms one volume of activations into another through a differentiable function. Four main types of layer are used to build a CNN: the Convolutional layer, the Rectified Linear Unit (ReLU) layer, the Pooling layer, and the Fully-connected layer. These layers are stacked together to form a full CNN.

A regular CNN could have the following architecture:

[INPUT - CONV - RELU - POOL - FC]
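As a concrete rendering of this pattern, the following sketch wires up exactly INPUT - CONV - RELU - POOL - FC. PyTorch, the channel counts, and the single-channel 28x28 input are illustrative assumptions on our part, not choices taken from the book:

    import torch
    import torch.nn as nn

    # INPUT - CONV - RELU - POOL - FC for a single-channel 28x28 image.
    # All sizes below are illustrative assumptions, not values from the book.
    model = nn.Sequential(
        nn.Conv2d(1, 16, kernel_size=3, padding=1),  # CONV: 1x28x28 -> 16x28x28
        nn.ReLU(),                                   # RELU: element-wise max(0, x)
        nn.MaxPool2d(kernel_size=2),                 # POOL: 16x28x28 -> 16x14x14
        nn.Flatten(),                                # flatten to a vector of 16*14*14 = 3136
        nn.Linear(16 * 14 * 14, 10),                 # FC: scores for 10 classes
    )

    x = torch.randn(1, 1, 28, 28)  # one dummy input volume
    print(model(x).shape)          # torch.Size([1, 10])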

However, in a deep CNN, there are generally more layers interspersed among these five basic components.

A classic deep neural network will have the following structure:

Input -> Conv -> ReLU -> Conv -> ReLU -> Pooling -> ReLU -> Conv -> ReLU -> Pooling -> Fully Connected
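Spelled out in the same illustrative PyTorch style (the channel counts and the 3x32x32 input are again arbitrary assumptions), that chain becomes:

    import torch.nn as nn

    # Input -> Conv -> ReLU -> Conv -> ReLU -> Pooling -> ReLU
    #       -> Conv -> ReLU -> Pooling -> Fully Connected
    deep_cnn = nn.Sequential(
        nn.Conv2d(3, 32, kernel_size=3, padding=1),   # Conv: 3x32x32 -> 32x32x32
        nn.ReLU(),
        nn.Conv2d(32, 32, kernel_size=3, padding=1),  # Conv
        nn.ReLU(),
        nn.MaxPool2d(2),                              # Pooling: 32x32x32 -> 32x16x16
        nn.ReLU(),                                    # the extra ReLU after pooling in the chain above
        nn.Conv2d(32, 64, kernel_size=3, padding=1),  # Conv: 32x16x16 -> 64x16x16
        nn.ReLU(),
        nn.MaxPool2d(2),                              # Pooling: 64x16x16 -> 64x8x8
        nn.Flatten(),
        nn.Linear(64 * 8 * 8, 10),                    # Fully Connected
    )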

AlexNet, as mentioned in the earlier section, is a perfect example of this kind of structure. The architecture of AlexNet is shown in Figure 3.4. After every layer, an implicit ReLU non-linearity has been added. We...
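To make the "implicit ReLU after every layer" point concrete, here is a sketch of an AlexNet-style stack following the widely used torchvision configuration (dropout omitted for brevity); it may differ in detail from the variant drawn in Figure 3.4:

    import torch.nn as nn

    # AlexNet-style stack (torchvision configuration, dropout omitted):
    # every conv and every hidden FC layer is followed by a ReLU.
    alexnet = nn.Sequential(
        nn.Conv2d(3, 64, kernel_size=11, stride=4, padding=2), nn.ReLU(),
        nn.MaxPool2d(kernel_size=3, stride=2),
        nn.Conv2d(64, 192, kernel_size=5, padding=2), nn.ReLU(),
        nn.MaxPool2d(kernel_size=3, stride=2),
        nn.Conv2d(192, 384, kernel_size=3, padding=1), nn.ReLU(),
        nn.Conv2d(384, 256, kernel_size=3, padding=1), nn.ReLU(),
        nn.Conv2d(256, 256, kernel_size=3, padding=1), nn.ReLU(),
        nn.MaxPool2d(kernel_size=3, stride=2),
        nn.Flatten(),                                  # 256x6x6 -> 9216 for a 3x224x224 input
        nn.Linear(256 * 6 * 6, 4096), nn.ReLU(),
        nn.Linear(4096, 4096), nn.ReLU(),
        nn.Linear(4096, 1000),                         # ImageNet's 1000 classes
    )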
