Deep learning frameworks

In this section, we present some of the most popular DL frameworks. In short, almost all of these libraries can use the graphics processor (GPU) to speed up training, are released under an open source license, and are the product of university and industry research groups.

TensorFlow is an open source mathematical software library, written in Python and C++, for machine intelligence. The Google Brain team began developing it in 2011, and it can be used to analyze data and make predictions that drive effective business outcomes. Once you have constructed your neural network model and completed the necessary feature engineering, you can train it interactively and monitor progress with plots or TensorBoard.

The main features offered by the latest release of TensorFlow are faster computing, flexibility, portability, easy debugging, a unified API, transparent use of GPU computing, ease of use, and extensibility. Other benefits are that it is widely used and well supported, and that it is production-ready at scale.
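To give a feel for the programming model, here is a minimal sketch (not taken from this chapter) using the TensorFlow 1.x graph-and-session API that this book targets; it builds a tiny graph, runs it in a session, and writes the graph out so it can be inspected in TensorBoard:

```python
import tensorflow as tf  # TensorFlow 1.x (graph/session API)

# Build a tiny computational graph: two constants and their product.
a = tf.constant(2.0, name='a')
b = tf.constant(3.0, name='b')
product = tf.multiply(a, b, name='product')

with tf.Session() as sess:
    # Write the graph so it can be visualized with: tensorboard --logdir=./logs
    writer = tf.summary.FileWriter('./logs', sess.graph)
    print(sess.run(product))  # prints 6.0
    writer.close()
```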

Keras is a deep learning library that sits on top of TensorFlow and Theano, providing an intuitive API inspired by Torch (perhaps the best Python API in existence). Deeplearning4j relies on Keras as its Python API and can import models from Keras and, through Keras, from Theano and TensorFlow.

François Chollet, a software engineer at Google, created Keras. It runs seamlessly on both CPU and GPU, and its user friendliness, modularity, and extensibility allow for easy and fast prototyping. Keras is probably one of the fastest-growing frameworks, because constructing NN layers with it is so easy. Therefore, Keras is likely to become the standard Python API for NNs.
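As a quick illustration of how little code a layer stack takes in Keras (the layer sizes below are arbitrary, chosen only for the example), a minimal Sequential model might look like this:

```python
from keras.models import Sequential
from keras.layers import Dense

# A small fully connected network; the layer sizes are illustrative only.
model = Sequential()
model.add(Dense(64, activation='relu', input_dim=100))
model.add(Dense(10, activation='softmax'))

# Compile with a loss and an optimizer; training would then be model.fit(x, y).
model.compile(optimizer='sgd',
              loss='categorical_crossentropy',
              metrics=['accuracy'])
model.summary()
```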

Theano is probably the most widespread library. Theano is written in Python, which is one of the most widely used languages in the field of ML (Python is also used in TensorFlow). Moreover, Theano allows computations to run on the GPU, which can be up to 24x faster than a single CPU. Theano lets you efficiently define, optimize, and evaluate complex mathematical expressions involving multidimensional arrays. Unfortunately, Yoshua Bengio announced on 28th September 2017 that development of Theano would cease, which means that Theano is effectively dead.
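For illustration only (assuming Theano is installed), the define-optimize-evaluate workflow looks roughly like this: declare symbolic variables, build an expression, compile it into a function, and then call that function on concrete arrays:

```python
import theano
import theano.tensor as T

# Symbolic variables: matrices of doubles.
x = T.dmatrix('x')
y = T.dmatrix('y')

# Define an expression and compile it into a callable function.
z = x + y
f = theano.function([x, y], z)

print(f([[1, 2]], [[3, 4]]))  # [[4. 6.]]
```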

Neon is a Python-based deep learning framework developed by Nervana. Neon has a syntax similar to Theano's high-level frameworks (for example, Keras). Currently, Neon is considered the fastest tool for GPU-based implementations, especially for CNNs, although its CPU-based implementation is relatively weaker than that of most other libraries.

Torch is a vast ecosystem for ML that offers a large number of algorithms and functions, including for DL and for processing various types of multimedia data, with a particular focus on parallel computing. It provides an excellent interface to the C language and has a large community of users. Torch is a library that extends the scripting language Lua, and it is intended to provide a flexible environment for designing and training ML systems. Torch is self-contained and highly portable across platforms (Windows, Mac, Linux, and Android), and scripts can run on all of them without modification. Torch offers many useful features for a wide range of applications.

Caffe, developed primarily by the Berkeley Vision and Learning Center (BVLC), is a framework designed to stand out for its expressiveness, speed, and modularity. Its unique architecture encourages application and innovation by allowing an easy transition from CPU to GPU computation. Its large community of users means that considerable development has occurred recently. Caffe is written in C++ with a Python interface, but the installation process can be long, due to the numerous support libraries that have to be compiled.

MXNet is a DL framework that supports many languages, such as R, Python, C++, and Julia. This is helpful because if you know any of these languages, you will not need to step out of your comfort zone at all to train your DL models. Its backend is written in C++ and CUDA and it is able to manage its own memory in a similar way to Theano.

MXNet is also popular because it scales very well and can work with multiple GPUs and machines, which makes it very useful for enterprise use. This is why Amazon has made MXNet its reference library for DL. In November 2017, AWS announced the availability of ONNX-MXNet, an open source Python package for importing Open Neural Network Exchange (ONNX) DL models into Apache MXNet.
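As a rough sketch of MXNet's Python interface (the array shapes and values are arbitrary), the NDArray API lets you place computations on a CPU or GPU context explicitly:

```python
import mxnet as mx

# Pick a device context; swap mx.cpu() for mx.gpu(0) on a CUDA-enabled machine.
ctx = mx.cpu()

a = mx.nd.ones((2, 3), ctx=ctx)
b = mx.nd.full((2, 3), 2.0, ctx=ctx)

# Arithmetic runs on the chosen device; asnumpy() copies the result back to the host.
print((a * b + 1).asnumpy())
```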

The Microsoft Cognitive Toolkit (CNTK) is a unified DL toolkit from Microsoft Research that makes it easy to train and combine popular model types across multiple GPUs and servers. CNTK implements highly efficient CNN and RNN training for speech, image, and text data. It supports cuDNN v5.1 for GPU acceleration, and it provides Python, C++, and C# APIs as well as a command-line interface.

Here is a table summarizing these frameworks:

| Framework | Supported programming languages | Training materials / community | CNN modeling capability | RNN modeling capability | Usability | Multi-GPU support |
| --- | --- | --- | --- | --- | --- | --- |
| Theano | Python, C++ | ++ | Ample CNN tutorials and prebuilt models | Ample RNN tutorials and prebuilt models | Modular architecture | No |
| Neon | Python | + | Fastest tools for CNN | Minimal resources | Modular architecture | No |
| Torch | Lua, Python | + | Minimal resources | Ample RNN tutorials and prebuilt models | Modular architecture | Yes |
| Caffe | C++ | ++ | Ample CNN tutorials and prebuilt models | Minimal resources | Creating layers takes time | Yes |
| MXNet | R, Python, Julia, Scala | ++ | Ample CNN tutorials and prebuilt models | Minimal resources | Modular architecture | Yes |
| CNTK | C++ | + | Ample CNN tutorials and prebuilt models | Ample RNN tutorials and prebuilt models | Modular architecture | Yes |
| TensorFlow | Python, C++ | +++ | Ample CNN tutorials and prebuilt models | Ample RNN tutorials and prebuilt models | Modular architecture | Yes |
| DeepLearning4j | Java, Scala | +++ | Ample CNN tutorials and prebuilt models | Ample RNN tutorials and prebuilt models | Modular architecture | Yes |
| Keras | Python | +++ | Ample CNN tutorials and prebuilt models | Ample RNN tutorials and prebuilt models | Modular architecture | Yes |

Apart from the preceding libraries, there are some recent initiatives for DL on the cloud. The idea is to bring DL capability to big data, with billions of data points and high-dimensional data. For example, Amazon Web Services (AWS), Microsoft Azure, Google Cloud Platform, and the NVIDIA GPU Cloud (NGC) all offer machine and deep learning services (http://searchbusinessanalytics.techtarget.com/feature/Machine-learning-platforms-comparison-Amazon-Azure-Google-IBM) that are native to their public clouds.

In October 2017, AWS released Deep Learning AMIs (Amazon Machine Images) for Amazon Elastic Compute Cloud (EC2) P3 instances. These AMIs come pre-installed with deep learning frameworks, such as TensorFlow, Gluon, and Apache MXNet, that are optimized for the NVIDIA Volta V100 GPUs in Amazon EC2 P3 instances. The deep learning service currently offers three types of AMI: the Conda AMI, the Base AMI, and the AMI with Source Code.
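As a hedged sketch of how such an AMI might be launched programmatically with boto3 (the region and AMI ID below are placeholders, not real values; look up the Deep Learning AMI ID for your region), you would pass the image ID and a P3 instance type to run_instances:

```python
import boto3

REGION = 'us-east-1'       # placeholder region
AMI_ID = 'ami-xxxxxxxx'    # placeholder: substitute the Deep Learning AMI ID for your region

ec2 = boto3.client('ec2', region_name=REGION)
response = ec2.run_instances(
    ImageId=AMI_ID,
    InstanceType='p3.2xlarge',  # P3 instances are backed by NVIDIA Volta V100 GPUs
    MinCount=1,
    MaxCount=1,
)
print(response['Instances'][0]['InstanceId'])
```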

The Microsoft Cognitive Toolkit is Azure's open source deep learning service. Similar to AWS's offering, it focuses on tools that can help developers build and deploy deep learning applications. The toolkit is installed in Python 2.7, in the root environment. Azure also provides a model gallery (https://www.microsoft.com/en-us/cognitive-toolkit/features/model-gallery/) that includes resources, such as code samples, to help enterprises get started with the service.

On the other hand, NGC empowers AI scientists and researchers with GPU-accelerated containers (see https://www.nvidia.com/en-us/data-center/gpu-cloud-computing/). NGC features containerized deep learning frameworks such as TensorFlow, PyTorch, and MXNet that are tuned, tested, and certified by NVIDIA to run on the latest NVIDIA GPUs on participating cloud service providers. Nevertheless, there are also third-party services available through their respective marketplaces.
