Mobile Deep Learning with TensorFlow Lite, ML Kit and Flutter

Build scalable real-world projects to implement end-to-end neural networks on Android and iOS

Product type: Paperback
Published: April 2020
Publisher: Packt
ISBN-13: 9781789611212
Length: 380 pages
Edition: 1st Edition
Authors (2): Rimjhim Bhadani, Anubhav Singh
Table of Contents (13 chapters)

Preface
1. Introduction to Deep Learning for Mobile
2. Mobile Vision - Face Detection Using On-Device Models
3. Chatbot Using Actions on Google
4. Recognizing Plant Species
5. Generating Live Captions from a Camera Feed
6. Building an Artificial Intelligence Authentication System
7. Speech/Multimedia Processing - Generating Music Using AI
8. Reinforced Neural Network-Based Chess Engine
9. Building an Image Super-Resolution Application
10. Road Ahead
11. Other Books You May Enjoy
Appendix

Understanding machine learning and deep learning

Before you can build solutions that draw on the technologies and algorithms of AI, it is important to understand a few key concepts of machine learning and deep learning. When we talk about the current state of AI, we usually mean systems that churn through huge amounts of data to find patterns and make predictions based on those patterns.

While the term "artificial intelligence" might conjure up images of talking humanoid robots or self-driving cars for a layperson, to someone studying the field it is more likely to evoke graphs and networks of interconnected computing modules.

In the next section, we will begin with an introduction to machine learning.

Understanding machine learning

In 1959, Arthur Samuel coined the term machine learning. Gently rephrasing his definition: machine learning is the field of computer science that enables machines to learn from past experience and, when given unknown input, to produce predictions based on that experience.

A more precise definition of machine learning can be stated as follows:

  • A computer program is said to be a machine learning program if its performance, P, at a task, T, improves with the experience, E, it gains while performing task T.
  • In a common framing of this definition, T is a prediction task, P is the accuracy the program achieves while performing T, and E is what the program has learned from the data it has seen. As E increases, the program makes better predictions; in other words, P improves because the program performs task T with higher accuracy.
  • A real-world analogy is a teacher training a pupil to perform a task and then evaluating the pupil's skill with an examination. The more training the pupil receives, the better they perform the task, and the higher they score in the examination.
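The T, P, and E in this definition can be made concrete with a toy sketch: a hypothetical classifier (all names and data below are illustrative, not from the book) whose accuracy P at a labeling task T improves as its experience E grows.

```python
# Task T: label numbers as "big" (>= 5) or "small".
# Experience E: a set of labeled examples.
# Performance P: accuracy on a held-out test set.

def train(examples):
    """Learn a decision threshold: the midpoint between the largest
    'small' example and the smallest 'big' example seen so far."""
    smalls = [x for x, label in examples if label == "small"]
    bigs = [x for x, label in examples if label == "big"]
    if not smalls or not bigs:
        return 0.0  # no information yet; arbitrary default
    return (max(smalls) + min(bigs)) / 2

def accuracy(threshold, test_set):
    """Performance P: the fraction of test examples labeled correctly."""
    correct = sum(
        1 for x, label in test_set
        if ("big" if x >= threshold else "small") == label
    )
    return correct / len(test_set)

# Held-out examples with the true boundary at 5.
test_set = [(x, "big" if x >= 5 else "small") for x in range(10)]

little_e = [(0, "small"), (6, "big")]             # little experience E
more_e = little_e + [(4, "small"), (5, "big")]    # more experience E

# As E grows, the learned threshold approaches the true boundary,
# so performance P at task T improves.
print(accuracy(train(little_e), test_set))  # 0.8
print(accuracy(train(more_e), test_set))    # 1.0
```

The specific learner is unimportant; the point is that P is measured, T is fixed, and only E changes between the two runs.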

In the next section, let's try to understand deep learning.

Understanding deep learning

We have encountered the term learning in many contexts, where it usually means gaining experience at performing a task. But what does deep mean when prefixed to "learning"?

In computer science, deep learning refers to machine learning models that involve more than one layer of learning. This means the computer program is composed of multiple algorithms through which the data passes, one after another, to finally produce the desired output.

Deep learning systems are built using the concept of neural networks. Neural networks are compositions of layers of neurons connected together so that data passes from one layer of neurons to the next until it reaches the final, or output, layer. Each layer of neurons receives its input in a form that may or may not match the form in which the data was originally fed into the network.
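This flow of data from layer to layer can be sketched in a few lines of NumPy. The layer sizes and random weights below are illustrative assumptions, not the book's example:

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(n_in, n_out):
    """A dense layer: a weight matrix and a bias vector."""
    return rng.normal(scale=0.1, size=(n_in, n_out)), np.zeros(n_out)

def forward(x, layers):
    """Pass the data through each layer in turn; tanh keeps values bounded."""
    for weights, bias in layers:
        x = np.tanh(x @ weights + bias)
    return x

# Input of 4 features -> hidden layer of 8 neurons -> output layer of 3 neurons.
network = [layer(4, 8), layer(8, 3)]
output = forward(np.array([1.0, 2.0, 3.0, 4.0]), network)
print(output.shape)  # (3,): one value per output neuron
```

Notice that each layer changes the shape of the data: the 4 input values become 8 hidden values, then 3 output values, which is exactly the "may or may not be the same form" point above.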

Consider the following diagram of a neural network:

A few terms are introduced in the preceding diagram. Let's discuss each of them briefly.

The input layer

The layer that holds the input values is called the input layer. Some argue that this layer is not actually a layer but only a variable that holds the data, and hence is the data itself rather than a layer. However, the dimensions of the matrix holding the data are important and must be defined correctly for the network to pass the data on to the first hidden layer; it is therefore reasonable to treat it, conceptually, as a layer that holds data.
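A quick NumPy sketch of this point: the input "layer" is just the data, but its dimensions must line up with what the first hidden layer expects. The 28x28 image size and the hidden-layer width below are illustrative choices, not from the book:

```python
import numpy as np

image = np.zeros((28, 28))        # raw input data, e.g. a grayscale image
input_layer = image.reshape(-1)   # the input layer is the data, correctly shaped

weights = np.zeros((784, 128))    # the first hidden layer expects 784 inputs
hidden = input_layer @ weights    # shapes agree: (784,) @ (784, 128) -> (128,)
print(input_layer.shape, hidden.shape)
```

If the input were left as a 28x28 matrix, the matrix product with the (784, 128) weights would fail; the "layer" is the data plus an agreed-upon shape.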

The hidden layers

Any layer that sits between the input layer and the output layer is called a hidden layer. A typical neural network used in production environments may contain hundreds of hidden layers. Often, hidden layers contain more neurons than either the input or the output layer, although in some special circumstances this does not hold. Giving the hidden layers more neurons lets the network process the data in a dimension other than that of the input, which allows the program to reach insights or patterns that may not be visible in the format in which the user feeds the data into the network.

The complexity of a neural network depends directly on the number of layers of neurons in the network. While a neural network may discover deeper patterns in the data by adding more layers, the extra layers also add to the computational cost of the network, and the network may enter an erroneous state called overfitting. Conversely, if the network is too simple, or in other words not adequately deep, it will reach another erroneous state called underfitting.

The output layer

The final layer, in which the desired output is produced and stored, is called the output layer. This layer often has one neuron per desired output category, or a single neuron holding the desired regression output.
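For classification, a softmax over the output layer turns one raw value per category into one probability per category. The three category scores below are made up for illustration:

```python
import numpy as np

def softmax(z):
    """Convert raw output-layer values into probabilities that sum to 1."""
    e = np.exp(z - np.max(z))  # subtract the max for numerical stability
    return e / e.sum()

raw_outputs = np.array([2.0, 1.0, 0.1])  # one value per output neuron/category
probs = softmax(raw_outputs)
print(int(probs.argmax()))  # 0: the first category is the prediction
```

For a regression task, the same output layer would instead be a single neuron whose raw value is the prediction itself.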

The activation function

Each layer in the neural network applies a function called the activation function. This function keeps the values held in the neurons within a normalized range; without it, the values could grow too large or too small and cause computational errors when the computer handles very large or very small numbers. Additionally, it is the activation function that enables the neural network to model non-linear patterns in data.
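Two common activation functions illustrate both roles described above: the sigmoid squashes any value into the normalized range (0, 1), and ReLU is a simple source of non-linearity. This is a minimal sketch, not the book's code:

```python
import numpy as np

def sigmoid(x):
    """Squash any input into the normalized range (0, 1)."""
    return 1.0 / (1.0 + np.exp(-x))

def relu(x):
    """A simple non-linearity: negative values become 0."""
    return np.maximum(0.0, x)

x = np.array([-20.0, 0.0, 20.0])
s = sigmoid(x)
print(np.all((s > 0) & (s < 1)))  # True: even extreme inputs stay in (0, 1)
print(relu(np.array([-3.0, 0.5])))
```

Without such a non-linear function between layers, stacking layers would be pointless: a composition of purely linear layers collapses into a single linear transformation.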
