
Artificial Intelligence for IoT Cookbook

Handling Data

The technique used to collect data often determines the type of models that can be utilized. If a seismograph only reported its current reading of seismic activity once an hour, the data would be meaningless: it would not be high fidelity enough to predict earthquakes. The job of a data scientist in an IoT project does not start after the data is collected; rather, the data scientist needs to be part of building the device. When a device is built, the data scientist needs to determine whether it is emitting the type of data that is appropriate for machine learning. Next, the data scientist helps the electrical engineer determine whether the sensors are in the right places and whether there is a correlation between sensors. Finally, the data scientist needs to store the data in a way that is efficient for analytics. By doing so, we avoid the first major pitfall of IoT: collecting and storing data that is, in the end, useless for machine learning.

This chapter examines storing, collecting, and analyzing data to ensure that there is enough data to perform effective and efficient machine learning. We are going to start by looking at how data is stored and accessed. Then, we are going to look at data collection design to ensure that the data coming off the device is feasible for machine learning.

This chapter will cover the following recipes:

  • Storing data for analysis using Delta Lake
  • Data collection design
  • Windowing
  • Exploratory factor analysis
  • Implementing analytic queries in Mongo/hot path storage
  • Ingesting IoT data into Spark

Storing data for analysis using Delta Lake

Today, there are many options for storing data for analysis. You can store it in a data lake, Delta Lake, or a NoSQL database. This recipe covers storing and retrieving data using Delta Lake. Delta Lake provides a fast and efficient way to store and work with data, and it allows you to look at data as it existed at any given point in the past.

Getting ready

While Delta Lake is an open source project, the easiest way to store files in Delta Lake is through Databricks. The setup of Databricks was discussed in Chapter 1, Setting Up the IoT and AI Environment. This recipe assumes you have Databricks set up and running.

How to do it...

Importing files into Delta Lake is easy. Data can be imported through files or streaming. The steps for this recipe are as follows:

  1. In Databricks, open the data panel by clicking on the Data button. Then, click on the Add Data button and drag your file into the Upload section.
  2. Click on Create Table in Notebook. The code generated for you will start with this:
# File location and type
file_location = "/FileStore/tables/soilmoisture_dataset.csv"
file_type = "csv"

# CSV options
infer_schema = "false"
first_row_is_header = "false"
delimiter = ","

df = spark.read.format(file_type) \
  .option("inferSchema", infer_schema) \
  .option("header", first_row_is_header) \
  .option("sep", delimiter) \
  .load(file_location)

display(df)
  3. Review the data and, when you are ready to save to Delta Lake, uncomment the last line:
# df.write.format("parquet").saveAsTable(permanent_table_name)
  4. Then, change "parquet" to "delta":
df.write.format("delta").saveAsTable(permanent_table_name)
  5. From here, query the data:
%sql
SELECT * FROM soilmoisture
  6. Alternatively, you can optimize how Delta Lake saves the file, making querying faster:
%sql
OPTIMIZE soilmoisture ZORDER BY (deviceid)

Delta Lake data can be updated, filtered, and aggregated. In addition, it can be turned into a Spark or Koalas DataFrame easily.
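
Delta Lake's versioning can be exercised directly from the notebook. The following is a minimal sketch, assuming the soilmoisture table created above; DESCRIBE HISTORY lists the table's versions, and VERSION AS OF queries the table as it existed at an earlier commit:

# List the table's commit history and versions
display(spark.sql("DESCRIBE HISTORY soilmoisture"))

# Read the table as it was at version 0 (the initial write)
old_df = spark.sql("SELECT * FROM soilmoisture VERSION AS OF 0")
display(old_df)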

How it works...

Delta Lake is built on top of Parquet. Utilizing columnar compression and metadata storage, it can make the retrieval of data up to 10 times faster than standard Parquet. In addition to faster performance, Delta Lake's data versioning allows data scientists to look at the data as it was at a particular time, enabling root cause analysis when their models drift.

Data collection design

The single most important factor in machine learning and IoT is data collection design. If the data collected is garbage, then no machine learning can be done on top of it. Suppose you are looking at the vibrations of a pump (shown in the following graph) to determine whether the pump is having issues with its mechanics or ball bearings, so that preventive maintenance can be performed before serious damage is done to the machine:

Importantly, real-time data at 100 Hz is prohibitively expensive to store in the cloud. To keep costs down, engineers often send data at intervals of 1 minute. Low-frequency sensor data often cannot accurately represent the issue that is being examined. The next chart shows how the data looks when it is only sampled once per minute:

Here, we see the vibrometer data overlaid with the data collected at 1-minute intervals. The data has some use, but it is not accurate, as it does not show the true magnitude of what is going on. Using the mean is worse. The following chart shows the vibrometer's mean reading over 1 minute:

Taking the average reading windowed over 1 minute is an even worse solution because the average value does not change when there is a problem with the pump. The following chart shows the vibrometer's standard deviation over 1 minute:

Using a standard deviation technique shows the variance compared to the mean, which can be used to determine whether there is an issue with the pump. This is a more accurate solution than the average technique.

Using the minimum and maximum over a 1-minute window presents the best representation of the magnitude of the situation. The following chart shows what the reading will look like:

Because IoT machines can work correctly for years before having issues, and forwarding high-frequency data to the cloud is cost-prohibitive, other measurements are used to determine whether the device needs maintenance. Techniques such as min/max, standard deviation, or spikes can be used to trigger a cloud-to-device message telling the device to send data at a much higher frequency. The resulting high-frequency diagnostic data can then be stored as large files in blob storage.
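
As a hedged sketch of that cloud-to-device trigger, the following uses the azure-iot-hub Python package; the service connection string, device ID, and command payload are hypothetical placeholders, and the device firmware is assumed to understand the command:

from azure.iot.hub import IoTHubRegistryManager

SERVICE_CONNECTION_STRING = "[Your IoT Hub service connection string]"
DEVICE_ID = "pump-001"  # hypothetical device ID

registry_manager = IoTHubRegistryManager(SERVICE_CONNECTION_STRING)

# Ask the device to stream at 100 Hz for the next 10 minutes
registry_manager.send_c2d_message(
    DEVICE_ID,
    '{"command": "set_frequency", "hz": 100, "duration_minutes": 10}'
)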

One of the challenges of IoT is finding meaningful data in a sea of data. In this recipe, we shall demonstrate techniques to mine for valuable data.

Getting ready

To get ready for data collection design, you will need a device streaming data at a high rate. In Chapter 1, Setting Up the IoT and AI Environment, we discussed getting a device streaming data into IoT Hub. In production, device data is often sent at intervals of 15 seconds or 1 minute, but for data collection design, you will need one device sending data at a high rate of 10 Hz, or 10 times a second, as in the sketch below. Once that data is flowing in, you can pull it into Databricks for real-time analysis.
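
If you don't have hardware on hand, a simulated device works just as well. The following is a minimal sketch using the azure-iot-device package; the connection string is a placeholder from your IoT Hub device registration, and the device ID and payload shape are assumptions:

import json
import random
import time

from azure.iot.device import IoTHubDeviceClient, Message

CONNECTION_STRING = "[Your device connection string]"

client = IoTHubDeviceClient.create_from_connection_string(CONNECTION_STRING)

while True:
    # Send a simulated vibration reading 10 times per second (10 Hz)
    payload = json.dumps({"deviceid": "pump-001", "sample": random.gauss(0, 1)})
    client.send_message(Message(payload))
    time.sleep(0.1)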

How to do it...

In our Databricks notebooks, we will analyze the IoT data using techniques such as variance, Z-Spikes, and min/max.

Variance

Variance is the measure of how much the data varies from the mean. In the code that follows, we are using Koalas, a distributed clone of pandas, to do our basic data engineering tasks, such as determining variance. The following code uses standard deviation over a rolling window to show data spike issues:

import databricks.koalas as ks
import pandas as pd

# pump_data is the high-frequency telemetry collected in the Getting ready section
df = ks.DataFrame(pump_data)
print("variance: " + str(df.var()))

# Rolling standard deviation over a 1-minute window (600 samples at 10 Hz)
minute = pump_data.copy()
minute['time'] = pd.to_datetime(minute['time'])
minute = minute.set_index('time')
minute['std'] = minute['sample'].rolling(window=600, center=False).std()
Duty cycles are used on IoT product lines before enough data is collected for machine learning. They are often simple measures, such as whether the device is too hot or is vibrating too much.

We can also look at high and low values, such as the maximum, to show whether the sensor is producing appropriate readings. The following code shows the maximum reading of our dataset:

# DF is a Spark DataFrame of the telemetry; avoid shadowing Python's built-in max
max_reading = DF.agg({"sample": "max"}).collect()[0]

Z-Spikes

Spikes can help determine whether there is an issue by looking at how rapidly a reading is changing. For example, an outdoor IoT device may have a different operating temperature at the South Pole than one in direct sun in Death Valley. One way of finding out whether there is an issue with the device is to look at how fast the temperature is changing. Z-Spikes are a typical time-based anomaly detection technique. They are used because they only look at a single device's readings and can give a value independent of environmental factors.

Z-Spikes look at how the spike differs from the standard deviation. They use a statistical z-test to determine whether a spike is greater than 99.5% of the population.
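
The following is a minimal sketch of that idea, assuming the pandas minute DataFrame from the variance example, with raw readings in the 'sample' column; a reading more than about 2.58 rolling standard deviations above the rolling mean is greater than roughly 99.5% of a normal population:

rolling = minute['sample'].rolling(window=600, center=False)
z = (minute['sample'] - rolling.mean()) / rolling.std()

# A z-score above 2.576 is greater than ~99.5% of a normal population
minute['spike'] = z > 2.576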

Min/max

Minimum and maximum values can show the points of greatest stress on the system. The following code shows how to get the min and max values of a 1-minute window:

minute['max'] = minute['sample'].rolling(window=600, center=False).max()

minute['min'] = minute['sample'].rolling(window=600, center=False).min()

Minimum and maximum values can emphasize outliers. This can be useful in determining anomalies. 

Windowing

There are three primary windowing functions: tumbling, hopping, and sliding. Both Spark and Stream Analytics can do windowing. Windowing allows you to look at aggregate functions such as average, count, and sum, as well as minimum and maximum values. It is a feature engineering technique that helps make data more manageable. In this recipe, we are going to cover several tools for windowing and the different ways to window.

Getting ready

To get ready, you will also need a device streaming data to IoT Hub. That stream will need to be ingested by either Azure's Stream Analytics, Spark, or Databricks.

How to do it...

Utilize a Databricks notebook or Stream Analytics workspace to perform these recipes. Windowing turns the noise of large datasets into meaningful features for your machine learning model.

Tumbling

Tumbling window functions group data streams into time segments (as shown in the following diagram). Tumbling means that windows do not repeat or overlap; data from one segment does not waterfall into the next:

Stream Analytics

In Stream Analytics, one way to use a tumbling window to count the events that happen every 10 minutes would be to do the following:

SELECT EventTime, COUNT(*) AS Count
FROM DeviceStream TIMESTAMP BY CreatedAt
GROUP BY EventTime, TumblingWindow(minute, 10)

Spark 

In Spark, to do the same count of events happening every 10 minutes, you would do the following:

from pyspark.sql.functions import *

windowedDF = eventsDF.groupBy(window("eventTime", "10 minute")).count()

Hopping

Hopping windows are tumbling windows that overlap. They allow you to set specific commands and conditions, such as: every 5 minutes, give me the count of sensor readings over the last 10 minutes. To make a hopping window the same as a tumbling window, make the hop size the same as the window size, as shown in the following diagram:

Stream Analytics

The following Stream Analytics example shows a count of messages over a 10-minute window. This count happens every 5 minutes:

SELECT EventTime, COUNT(*) AS Count
FROM DeviceStream TIMESTAMP BY CreatedAt
GROUP BY EventTime, HoppingWindow(minute, 10, 5)

Spark

In PySpark, this would be done through a window function. The following example shows a Spark DataFrame that is windowed, producing a new entry in the DataFrame every 5 minutes, each spanning a 10-minute period:

from pyspark.sql.functions import *

windowedDF = eventsDF.groupBy(window("eventTime", "10 minute", "5 minute")).count()

Sliding

Sliding windows produce an output when an event occurs. The following diagram illustrates this concept:

Stream Analytics

In the following Stream Analytics example, by using a sliding window, we only receive a result when there are more than 100 messages over a 10-minute window. Unlike the other methods, which look at an exact window of time and emit one message for that window, sliding windows can produce output on every input message. Another use of this would be to show a rolling average:

SELECT EventTime, COUNT(*) AS Count
FROM DeviceStream TIMESTAMP BY CreatedAt
GROUP BY EventTime,
SlidingWindow(minute, 10)
HAVING COUNT(*) > 100

How it works...

Using windowing, IoT data can show factors such as frequency, sums, standard deviation, and percentile distribution over a period of time. Windowing can be used to enrich the data with feature engineering or can transform the data into an aggregate dataset. Windowing, for example, can show how many devices were produced in a factory or show the modulation in a sensor reading.
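
As a sketch of how those aggregates come together, the following assumes a batch Spark DataFrame named eventsDF with eventTime and sample columns (both names are assumptions) and computes several windowed features at once:

from pyspark.sql.functions import window, count, avg, stddev, expr

# Several windowed features per 10-minute window
features = eventsDF.groupBy(window("eventTime", "10 minutes")).agg(
    count("*").alias("count"),
    avg("sample").alias("mean"),
    stddev("sample").alias("std"),
    expr("percentile_approx(sample, 0.95)").alias("p95"),
)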

Exploratory factor analysis

Garbage data is one of the key issues that plague IoT. Data is often not validated before it is collected. Often, there are issues with bad sensor placement, or data appears random because it is not an appropriate measure for what is being observed. For example, averaged vibrometer data may appear, because of the central limit theorem, to be centered around the mean even when the raw signal actually shows a large increase in magnitude. To combat this, it is important to do exploratory factor analysis on the device data.

In this recipe, we will explore several techniques of factor analysis. Aggregate data and raw telemetry data are used in Databricks notebooks to perform this analysis.

Getting ready

You will need to have data in a table in Databricks, which we implemented in the Storing data for analysis using Delta Lake recipe. Once data is in a Spark data table, it is ready for factor analysis.

How to do it...

This recipe is composed of two sections. The first performs a visual inspection of the data. Visual inspection can reveal software bugs, show how the device behaves, and expose patterns in the device data. The second part looks at correlation and covariance. These techniques are often used to determine whether a sensor is redundant.

Visual exploration

Spark allows you to look at basic charts without much code. Using the magic symbol at the top of a notebook cell, you can easily change language from Python to Scala or SQL. One word of caution about Databricks' built-in charting system: it only looks at the first 10,000 records. For larger datasets, use other charting libraries. The steps are as follows:

  1. Query the data in Databricks using the %sql magic, as shown:
%sql
select * from Telemetry
  2. Select the chart icon at the bottom of the returned data grid. It will bring up the chart builder UI, as shown in the following screenshot:

  3. Select the chart type that best represents the data. Some charts are better suited for variable comparison while others can help reveal trends.

The following section reviews when and why you would use different chart types.

Chart types

Different types of charts illuminate different aspects of the data, such as comparison, composition, relationship, and distribution. Relationship charts are used to test a hypothesis or look at how one factor affects other factors. Composition shows the percentage breakdown of a dataset. It is often used to show how factors compare against others. A pie chart is a simple composition chart. Distribution charts are used to show distributions of a population. They are often used to determine whether the data is random, has a large spread, or is normalized. Comparison charts are used to compare one value against others. 

Bar and column charts

Bar and column charts are used to make comparisons between items. Bar charts can accommodate many items because of their layout. Column and bar charts can also show change over time. The following chart is an example of a bar and column chart:

Scatter plot

Scatter plots can show the relationship between two variables. They can also show a trend line. The following is an example of a scatter plot:

Bubble charts

When you want to show the relationship between three variables, you can use bubble charts. This can be used to show anomalous behavior. The following is an example of a bubble chart:

Line charts

These charts show changes over time and can be used to show how a device's data changes over a day. If a device has seasonal data, you may need to include the time of day as part of the algorithm or use algorithms that remove seasonality. The following is an example of a line chart:

Area charts

Area charts are like line charts but are used to show how the volume of one segment compares to another. The following is an example of an area chart:

Quantile plot

Quantile plots help determine the population shape by splitting data into segments (quantiles). Common quantiles are 25%, 50%, and 75% (these are known as quartiles), or 33% and 66%, or 5% and 95%. Understanding whether data is behaving within expected parameters is important in understanding whether a device is having problems. The following is an example of a quantile plot:
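
The same quantiles can also be computed numerically. The following one-line sketch assumes a Spark DataFrame df with a numeric sample column; approxQuantile takes the column name, a list of probabilities, and a relative error:

# Approximate 25%, 50%, and 75% quantiles (quartiles) with 1% relative error
quartiles = df.approxQuantile("sample", [0.25, 0.5, 0.75], 0.01)
print("quartiles:", quartiles)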

Redundant sensors

One of the challenges of IoT is determining where to place the sensors and how many sensors are needed. Take pumps, for example: one way of determining whether a pump's ball bearings are going out is to use a microphone to listen for a high-pitched squeal. Another way is to use a vibrometer to determine whether it is vibrating more. Yet another way is to measure the current and see whether it is fluctuating. There is no one right way to determine whether a pump's ball bearings are going out; however, implementing all three techniques may be cost-prohibitive and redundant. A common way of looking at the correlation between different sensors is using a heat map. In the following code, we use a heat map to find the correlation between sensors. In other words, we are looking for sensors that are transmitting redundant information:

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns


# load the sample training data
train = pd.read_csv('/dbfs/FileStore/tables/Bike_train.csv')

# build a synthetic correlation matrix to demonstrate the technique
b = []
for i in range(50):
    a = np.random.normal(5, i + 1, 10)
    b.append(a)
c = np.array(b)
cm = np.corrcoef(c)

plt.imshow(cm,interpolation='nearest')
plt.colorbar()

#heat map
plt.figure(figsize=(17,11))
sns.heatmap(train.iloc[:,1:30].corr(), cmap= 'viridis', annot=True)
display(plt.show())

The following screenshot shows the heat map:

In the preceding example, we can see that count and registered have a very high correlation, with a coefficient close to 1. Similarly, temp and atemp have a high degree of correlation. Using this data without pruning the correlated columns can give a weighted effect to machine learning models trained on the dataset.


When a device has very little data, it can still be valuable to perform analysis of variance, distribution, and deviation. Because statistical analysis has a lower bar of entry than machine learning, it can be deployed at an earlier phase in the machine's life cycle. Doing statistical analysis helps ensure that the device is emitting proper data that is not duplicated or false and can be used for machine learning.

Cross-tabulation provides a table of frequency distributions. This can be used to determine whether two different sensors are counting the same thing. The following code displays the cross-tabulation table:

display(DF.stat.crosstab("titleType", "genres"))

Sample covariance and correlation

Covariance measures the joint variability of two sensors with respect to each other. Positive values indicate that the sensors increase and decrease together; negative values indicate an inverse relationship between them. The covariance of two sensors can be calculated using the stat.cov function on a Spark DataFrame:

df.stat.cov('averageRating', 'numVotes')
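
The companion stat.corr function returns the Pearson correlation for the same pair of columns; because it is normalized to the range -1 to 1, it is often easier to interpret than raw covariance:

df.stat.corr('averageRating', 'numVotes')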

How it works...

Modifying physical devices after they have been produced can be costly. This recipe shows how to inspect a prototype device to make sure the data it produces will not be meaningless. Using data analysis tools such as Databricks for preliminary data analysis can save us from issues that plague IoT and AI, such as bad sensor placement, under- or overcommunication, and data that is not usable for machine learning. Performing standard machine learning tasks such as predictive maintenance, anomaly detection, or estimating remaining useful life depends on good data.

There's more...

You can further explore data by creating a filtering widget. For example, you could use the CREATE WIDGET DROPDOWN query as shown:

%sql
CREATE WIDGET DROPDOWN titleType DEFAULT "movie" CHOICES SELECT DISTINCT titleType FROM imdbTitles

Creating a widget allows you to create a data query that can be easily segmented, as shown in the following code:

%sql
select * from imdbTitles where titleType = getArgument("titleType")

Other widget types, such as text, combo box, and multi-select, are also available.

Implementing analytic queries in Mongo/hot path storage

In IoT architectures, there is hot and cold path data. Hot path data can be accessed immediately. This is typically stored in a NoSQL or time-series database. An example of this would be to use a time-series database such as InfluxDB to count the number of resets per device over the last hour. This could be used to aid in feature engineering. Another use of hot data is precision analysis. If a machine breaks in the field, a database such as MongoDB can be queried for just the data that that machine has generated over the last month.

Cold path data is typically used for batch processing, such as machine learning and monthly reports. Cold path data is primarily data stored in a blob, S3 storage, or HDFS-compliant data store. Separating a hot path from a cold path is usually a factor of cost and scalability. IoT data generally falls into the category of big data. If a data scientist queries years' worth of data from a NoSQL database, the web application that is using it can falter. The same is not true for data stored in the cold path on a disk. On the other hand, if the data scientist needs to query a few hundred records from billions of records, a NoSQL database would be appropriate.

This recipe focuses on working with hot data, primarily on extracting IoT data from MongoDB. First, we will extract data from one device, and then we will aggregate it across multiple devices.

Getting ready

There are several ways to get IoT data into MongoDB. To follow this recipe, start up MongoDB. This can be done through Azure Kubernetes Service or the Atlas MongoDB cloud service. Once you have a database, you can use a function app to move data between IoT Hub and MongoDB, as in the sketch below.
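
The following is a hedged sketch of that function app, using an Event Hub trigger (IoT Hub's built-in endpoint is Event Hub-compatible) with the azure-functions and pymongo packages; the MONGO_CONNECTION_STRING app setting and the iot database and telemetry collection names are hypothetical:

import json
import os

import azure.functions as func
from pymongo import MongoClient

client = MongoClient(os.environ["MONGO_CONNECTION_STRING"])
collection = client["iot"]["telemetry"]

def main(event: func.EventHubEvent):
    # Each event body is a JSON telemetry message from a device
    body = json.loads(event.get_body().decode("utf-8"))
    collection.insert_one(body)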

How to do it...

Mongo has a list of filtering options comparable to SQL. The following code shows how to connect to a local MongoDB instance and query for the records produced by a single device:

df = spark.read.format("mongo").option("uri",
"mongodb://127.0.0.1/products.inventory").load()
pipeline = "{'deviceid':'8ea23889-3677-4ebe-80b6-3fee6e38a42c'}"
df = spark.read.format("mongo").option("pipeline", pipeline).load() df.show()

The next example shows how to do a more complicated filter followed by a group by operation, finally summing the quantities. The output will show the total quantity of each item with a status of A:

pipeline = "[ { '$match': { 'status': 'A' } }, { '$group': { '_id': '$item', 'total': { '$sum': '$qty' } } } ]"
df = spark.read.format("mongo").option("pipeline", pipeline).load()
df.show()

How it works...

Mongo stores indexed data on multiple computers or partitions. This allows specific data to be retrieved with latencies of milliseconds. NoSQL databases can provide fast lookups for data. In this recipe, we discussed how to query data from MongoDB into Databricks.

Ingesting IoT data into Spark

To connect Spark to IoT Hub, first, create a consumer group. A consumer group is a pointer to the current position in the journal that the consumer has reached. There can be multiple consumers of the same journal of data. Consumer groups are parallelizable and distributable, enabling you to write programs that remain stable even at a massive scale.

Getting ready

For this recipe, go into the Azure IoT Hub portal and click on the Built-in endpoints menu option. Then, add a consumer group by entering some text. While still on that screen, copy the Event Hub-compatible endpoint connection string.

How to do it...

The steps for this recipe are as follows:

  1. In Databricks, start a new notebook and enter the information needed to connect to IoT Hub. Then, enter the following code:
import datetime as dt
import json

ehConf = {}
ehConf['eventhubs.connectionString'] = "[The connection string you copied]"
ehConf['eventhubs.consumerGroup'] = "[The consumer group you created]"

# Read from the start of the stream up to the current time
endTime = dt.datetime.now().strftime("%Y-%m-%dT%H:%M:%S.%fZ")

startingEventPosition = {
    "offset": -1,
    "seqNo": -1,            # not in use
    "enqueuedTime": None,   # not in use
    "isInclusive": True
}

endingEventPosition = {
    "offset": None,         # not in use
    "seqNo": -1,            # not in use
    "enqueuedTime": endTime,
    "isInclusive": True
}

ehConf["eventhubs.startingPosition"] = json.dumps(startingEventPosition)
ehConf["eventhubs.endingPosition"] = json.dumps(endingEventPosition)
ehConf["eventhubs.receiverTimeout"] = 100
  2. Put the data into a Spark DataFrame:
df = spark \
.readStream \
.format("eventhubs") \
.options(**ehConf) \
.load()
  3. The next step is to apply a structure to the data so that you can use structured streaming:
from pyspark.sql.types import *

Schema = StructType([
    StructField("deviceEndSessionTime", StringType()),
    StructField("sensor1", StringType()),
    StructField("sensor2", StringType()),
    StructField("deviceId", LongType()),
])
  4. The final step is to apply the schema to a DataFrame. This allows you to work with the data as if it were a table:
from pyspark.sql.functions import *

rawData = df \
    .selectExpr("cast(Body as string) as json") \
    .select(from_json("json", Schema).alias("data")) \
    .select("data.*")

How it works...

In this recipe, we connected to IoT Hub and put the data into a DataFrame. We then applied a structure to that DataFrame, allowing us to query the data in much the same way as a database table.

In the next few chapters, we will discuss how to create models. After creating models using cold path data, you can perform near-real-time machine learning by pushing those trained models into Databricks structured streaming.


Key benefits

  • Discover quick solutions to common problems that you’ll face while building smart IoT applications
  • Implement advanced techniques such as computer vision, NLP, and embedded machine learning
  • Build, maintain, and deploy machine learning systems to extract key insights from IoT data

Description

Artificial intelligence (AI) is rapidly finding practical applications across a wide variety of industry verticals, and the Internet of Things (IoT) is one of them. Developers are looking for ways to make IoT devices smarter and to make users’ lives easier. With this AI cookbook, you’ll be able to implement smart analytics using IoT data to gain insights, predict outcomes, and make informed decisions, along with covering advanced AI techniques that facilitate analytics and learning in various IoT applications. Using a recipe-based approach, the book will take you through essential processes such as data collection, data analysis, modeling, statistics and monitoring, and deployment. You’ll use real-life datasets from smart homes, industrial IoT, and smart devices to train and evaluate simple to complex models and make predictions using trained models. Later chapters will take you through the key challenges faced while implementing machine learning, deep learning, and other AI techniques, such as natural language processing (NLP), computer vision, and embedded machine learning for building smart IoT systems. In addition to this, you’ll learn how to deploy models and improve their performance with ease. By the end of this book, you’ll be able to package and deploy end-to-end AI apps and apply best practice solutions to common IoT problems.

Who is this book for?

If you’re an IoT practitioner looking to incorporate AI techniques to build smart IoT solutions without having to trawl through a lot of AI theory, this AI IoT book is for you. Data scientists and AI developers who want to build IoT-focused AI solutions will also find this book useful. Knowledge of the Python programming language and basic IoT concepts is required to grasp the concepts covered in this artificial intelligence book more effectively.

What you will learn

  • Explore various AI techniques to build smart IoT solutions from scratch
  • Use machine learning and deep learning techniques to build smart voice recognition and facial detection systems
  • Gain insights into IoT data using algorithms and implement them in projects
  • Perform anomaly detection for time series data and other types of IoT data
  • Implement embedded systems learning techniques for machine learning on small devices
  • Apply pre-trained machine learning models to an edge device
  • Deploy machine learning models to web apps and mobile using TensorFlow.js and Java
Product Details

Publication date : Mar 05, 2021
Length : 260 pages
Edition : 1st
Language : English
ISBN-13 : 9781838981983





Table of Contents

10 Chapters

  1. Setting Up the IoT and AI Environment
  2. Handling Data
  3. Machine Learning for IoT
  4. Deep Learning for Predictive Maintenance
  5. Anomaly Detection
  6. Computer Vision
  7. NLP and Bots for Self-Ordering Kiosks
  8. Optimizing with Microcontrollers and Pipelines
  9. Deploying to the Edge
  10. About Packt

Customer reviews

Rating distribution: 4.9 out of 5 (10 ratings)
5 star: 90%, 4 star: 10%, 3 star: 0%, 2 star: 0%, 1 star: 0%
AMol SOlanki, Dec 05, 2021 (5 stars)
If you want to apply AI to your IoT application, then this book will do the trick. The author has included multiple recipes for a variety of IoT apps, and has done a great job by not sticking to one language; instead, the book contains solutions in multiple languages and environments, which is great when you want to solve real-world problems. The step-by-step approach of how to do it and how it works stands out. The author has put in extra effort to explain each and every concept. If you want to implement an IoT solution with AI, then this book is a great and must-have resource.
(Amazon Verified review)
Barron Barnett, Mar 13, 2021 (5 stars)
Disclaimer: I was given a copy to review and was asked by the author to do so. Like Sean, I worked with Mike for 3 years, helping numerous companies develop and accelerate their IoT solutions. Like most engineering teams, we all had our own focus, domain, and priorities that we had to trade, and we understood each other's perspectives. When issues did arise, we spent most of our time educating each other, often realizing we had a trade to make or were walking a dead-end path. My focus was embedded hardware, software, and edge solutions.

Overall, I'd say this book is good at giving a person enough background to be dangerous. What do I mean? That sounds horrible, right? What I mean is that someone can gain the technical skills and a rudimentary understanding but may not have the experience, institutional knowledge, or full understanding of the limitations. This problem doesn't just apply to this book; it's common everywhere, and it's a factor that comes with experience. I regularly deal with people who still look at AI/ML at times like a magical cure-all.

The book doesn't sell AI/ML as a magic solution and covers a good majority of the basics, with examples you can do yourself to see how they work. The problem is that each of the subparts could fill multiple volumes on its respective topic. For example, the book does cover a preventative maintenance tutorial. What is missing is a lot of the statistics behind the validity of the model: the probability of proper detection, as well as failures in detection, given an initial training dataset's size and the number of products actually put in the field. That subject alone is easily at least a few chapters in an advanced book, if not an entire book itself. This was a point we often had to work on with customers, to educate them and help them determine the viability of getting a training dataset for the size of their fielded system.

If you're getting into IoT, this is absolutely a book to pick up to lay a foundation to start building on top of. It provides solid paths to avoid common issues that regularly occur in IoT solutions, especially for understanding how to leverage the cloud in your solution.
(Amazon Verified review)
AT, Apr 26, 2021 (5 stars)
Disclaimer: The publisher asked me to review this book and gave me a review copy. I will give my honest opinion about the book, the good and the less so.

This book will give you a great start if you want to learn to apply ML models to IoT applications. It covers the complete end-to-end IoT/ML life cycle, pretty much every step, right from setting up the environment and choosing hardware to deployment to edge devices. It walks you through with hands-on example code, covering a lot of different tools and techniques, with great detail on the data handling part.

Overall, I highly recommend this book if you want to start building an IoT application. It is spot on, with hands-on recipes and different real-world examples.
(Amazon Verified review)
Sean Kelly, Mar 10, 2021 (5 stars)
Disclaimer: The author asked me to review this book and gave me a review copy. I worked with Mike, side by side, for more than 3 years as we helped hundreds of companies build IoT solutions based on the concepts, tools, and technologies covered in this book. While we didn't always see eye to eye on how best to help our customers, we agreed far more often than we differed, and the healthy debate made us both better for it.

This book's biggest strength is also its biggest weakness. Building an IoT solution that leverages AI is an enormous task encompassing everything from electrical engineering, firmware development, network topologies, high-availability cloud architectures, data science, and UI/UX, to business models that make the whole thing profitable. The topic could easily fill multiple volumes. While Mike is focused on the AI part, some amount of the rest has to come into play so that the reader can understand how it all fits together.

Overall, Mike did an excellent job. The recipes are solid and representative of the current state of the art. I would have liked it if he had spent more time explaining some of the information that he mentions only in passing, but then the book would have been twice as big and probably felt less focused.

If you are building an IoT solution and thinking about how to utilize AI, this book will pay for itself quickly in time saved by not running down dead ends and avoiding common pitfalls.
(Amazon Verified review)
Murphy Choy, Aug 26, 2021 (5 stars)
For folks like me who love IoT, this is a great recipe book for a variety of IoT applications. Easy to use, quick to see results.
(Amazon Verified review)