Automated Machine Learning on AWS: Fast-track the development of your production-ready machine learning applications the AWS way
Trenton Potgieter

Chapter 1: Getting Started with Automated Machine Learning on AWS

If you have ever had the pleasure of successfully driving a production-ready Machine Learning (ML) application to completion or you are currently in the process of developing your first ML project, I am sure that you will agree with me when I say, "This is not an easy task!"

Why do I say that? Well, if we ignore the intricacies involved in gathering the right training data, analyzing and understanding that data, and then building and training the best possible model, I am sure you will agree that the ML process in itself is a complicated, time-consuming, and largely manual undertaking. And it is these factors, plus many more, that make ML tasks so difficult to automate.

The primary goal of this chapter is to emphasize these challenges by reviewing a practical example that sets the stage for why automating the ML process is difficult. This chapter will highlight what governing factors should be considered when performing this automation and how leveraging various Amazon Web Services (AWS) capabilities can make the task of driving ML projects into production less daunting and fully automated. By the end of this chapter, we will have established a common foundation for overcoming these challenges through automation.

Therefore, in this chapter, we will cover the following topics:

  • Overview of the ML process
  • Complexities in the ML process
  • An example of the end-to-end ML process
  • How AWS can make automating ML development and the deployment process easier

Technical requirements

You will need access to a Jupyter Notebook environment to follow along with the example in this chapter. Sample code has been provided for the various steps of the ML process, and a Jupyter Notebook example is available in this book's GitHub repository (https://github.com/PacktPublishing/Automated-Machine-Learning-on-AWS/blob/main/Chapter01/ML%20Process%20Example.ipynb) for you to work through the entire example at your own pace.

For further instructions on how to set up a Jupyter Notebook environment, you can refer to the installation guide (https://jupyterlab.readthedocs.io/en/stable/getting_started/installation.html) to either set up JupyterLab or classic Jupyter Notebook. Alternatively, for local notebook development using a development IDE, such as Visual Studio Code, you can refer to the VS Code documentation (https://code.visualstudio.com/docs/datascience/jupyter-notebooks).
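If you are starting from a fresh Python environment, the following notebook cell shows one way to install the libraries used in this chapter's example. The package list is an assumption based on the code in this chapter, not an official requirements file from the book:

    %pip install pandas numpy scikit-learn tensorflow matplotlib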

Overview of the ML process

Unfortunately, there is no established how-to guide when performing ML. This is because every ML use case is unique and specific to the application that leverages the resultant ML model. Instead, there is a general process pattern that most data scientists, ML engineers, and ML practitioners follow. This process model is called the Cross-Industry Standard Process for Data Mining (CRISP-DM) and while not everyone follows the specific steps of the process verbatim, most production ML models have probably, in some shape or form, been built by using the guardrails that the CRISP-DM methodology provides.

So, when we refer to the ML process, we are invariably referring to the overall methodology of building production-ready ML models using the guardrails from CRISP-DM.

The following diagram shows an overview of the CRISP-DM guidelines for creating a typical process that an ML practitioner might follow:

Figure 1.1 – Overview of a typical ML process

In a nutshell, the process starts with the ML practitioner being tasked with providing an ML model that addresses a specific business use case. The ML practitioner then finds, ingests, and analyzes an appropriate dataset that can be effectively leveraged to accomplish the goals of the ML project.

Once the data has been analyzed, the ML practitioner determines the most applicable modeling techniques that extract the most relevant information from the data to address the use case. These techniques include the following:

  1. Determining the most applicable ML algorithm
  2. Creating new aspects (engineering new features) of the data that can further improve the chosen model's overall effectiveness
  3. Separating the data into training and testing sets for model training and evaluation

The ML practitioner then codifies the algorithm's architecture and training/testing/evaluation routines. These routines are then executed to determine the best possible model parameters – ones that optimize the model to fit both the data and the business use case.

Finally, the best model is deployed into production to serve predictions that match the initial objective of the business use case.

As you can see, the overall process seems relatively straightforward and easy to follow. So, you may be wondering what all the fuss is about. For example, you may be asking yourself, Where is the complexity in this process? or Why do you say that this is so hard to automate?

While the process may look simplistic, the reality when executing it is vastly different. The following diagram provides a more realistic representation of what an ML practitioner may observe when developing an ML use case:

Figure 1.2 – Overview of a realistic ML process

As you can see, the overall process is far more convoluted than the typical representation shown in Figure 1.1. There are potentially multiple different paths that can be taken through the process. Each course of action is based on the results captured from the previous step in the process. Additionally, taking a particular course of action may not always yield the desired results, thus forcing the ML practitioner to have to reset or go back and choose a different set of criteria that will hopefully produce a better result.

So, now that we have provided a high-level overview of what the typical ML process should entail, let's examine some of the complexities and challenges that make the ML process difficult.

Complexities in the ML process

Each iteration through the process is an experiment to see whether the changes that were made in a previous part of the process will yield a better result or a more optimized ML model. It is this process of iteration that makes the ML workflow difficult to automate. The goal of each iteration or experiment is to improve the model's overall predictive capabilities. During each iteration, we fine-tune the parameters, discover new variables, and verify that these changes improve the overall accuracy of the model's prediction. Each experiment also provides further insight into where we are in the overall process and what the next steps might be. In essence, having to potentially go back and tweak a previous step, or even go back to the very beginning of the process and start with a different set of data, parameters, or even a different ML model altogether, is a manual process. But even unsuccessful experiments have value since they allow us to learn from our mistakes and hopefully steer us toward a successful outcome.

Note

Tolerating failures and not letting them derail the overall ML process is a key factor in any successful ML strategy.

So, if the overall process is complicated, and even a failed experiment is expected to steer us toward a more successful outcome that impacts the overall ML strategy, it becomes noticeably clear why automating the entire process is challenging but necessary – automation becomes a crucial part of the overall success criteria of any ML project.

Now that we have a good idea of what makes the ML process difficult, let's explore these challenges further by covering a practical example.

An example of the end-to-end ML process

To better illustrate that the overall ML process is hard and that automation is challenging but crucial, we will set the stage with a hands-on example use case.

Introducing ACME Fishing Logistics

ACME Fishing Logistics is a fictitious organization that's concerned with the overfishing of the Sea Snail or Abalone population. Their primary goal is to educate fishermen on how to determine whether an abalone is old enough for breeding. What makes the age determination process challenging is that to verify the abalone's age, it needs to be shucked so that the inside of the shell can be stained and then the number of rings can be counted through a microscope. This involves destroying the abalone to determine whether it is old enough to be kept or returned to the ocean. So, ACME's charter and the goal behind their website is to help fishermen evaluate the various physical characteristics of an abalone so that they can determine its age without killing it.

The case for ML

As you can probably imagine, ACME has not been incredibly successful in its endeavor to prevent abalone overfishing through a simple education process. The CTO has determined that a more proactive strategy must be implemented. Due to this, they have tasked the website manager with using ML to more accurately predict an abalone's age when fishermen enter the physical characteristics of their catch into the new Age Calculator module of the website. This is where you come in, as ACME's resident ML practitioner – it is your job to create the ML model that serves abalone age predictions to the new Age Calculator.

We can start by using the CRISP-DM guidelines to frame the business use case. The business use case is an all-encompassing step that establishes the overall framework and incorporates the individual steps of the CRISP-DM process.

The purpose of this stage of the process is to establish what the business goals are and to create a project plan that achieves these goals. This stage also includes determining the relevant criteria that define whether, from a business perspective, the project is deemed a success; for example:

  • Business Goal: The goal of this initiative is to create an Age Calculator web application that enables fishermen to determine the age of their abalone catch to determine whether it is below the breeding age threshold. To establish how this business goal can be achieved, several questions arise. For example, how accurate does the age prediction need to be? What evaluation metrics will be used to determine the prediction's accuracy? What is the acceptable accuracy threshold? Is there valid data for the use case? How long will the project take? Having questions like these helps set realistic goals for planning.
  • Project Plan: A project plan can be formulated by investigating what the answers to some of these questions might be. For example, by investigating what data to use and where to find it, we can start to formulate the difficulties in acquiring the data, which impacts how long the project might take. Additionally, understanding the model's complexity also impacts project timelines, as more complicated models require more time to build, evaluate, and tweak.
  • Success Criteria: As the project plan starts to formulate, we start to get a picture of what success looks like and how to measure it. For example, if we know that creating a complicated model will negatively impact the delivery timeline, we can relax the acceptable prediction accuracy criteria for the model and reduce the time it takes to develop a production-grade model. Additionally, if the business goal is simply to help the fishermen determine the abalone age but we have no way of tracking whether they abide by the recommendation, then our success criteria can be measured – not in terms of the model's accuracy but how often the Age Calculator is accessed and used. For instance, if we get 10 application hits a day, then the project can be deemed successful.

While these are only examples of what this stage of the process might look like, it illustrates that careful forethought and planning, along with a very specific set of objectives, must be outlined before any ML processes can start. It also illustrates that this stage of the process cannot be automated, though having a set plan with predefined objectives creates the foundation on which an automation framework could potentially be incorporated.

Getting insights from the data

Now that the overall business case is in place, we can dive into the meat of the actual ML process, starting with the data stage. As shown in the following diagram, the data stage is the first individual step within the framework of the business case:

Figure 1.3 – The data stage

It is at this point that we determine what data is available, how to ingest the data, what the data looks like, what characteristics of the data are most relevant to predicting the age, and which features need to be re-engineered to create the most optimal production-ready model.

Important Note

It is a well-known fact that the data acquisition and exploratory analysis part of the process can account for 70%–80% of the overall effort.

A model worthy of being considered production-ready is only as good as the data it has been trained on. The data needs to be fully analyzed and completely understood to extract the most relevant features for model building and training. We can accomplish this using a technique commonly referred to as Exploratory Data Analysis (EDA), where we assess the statistical components of the data, potentially visualizing and creating charts to fully grasp feature relevance. Once we have grasped each feature's importance, we might choose to get more important data, remove unimportant data, and potentially engineer new facets of the data, all to have the trained model learn from these optimal features.

Let's walk through an example of what this stage of the process might look like for the Age Calculator use case.

Sourcing, ingesting, and understanding the data

For our example, we will be using the Abalone Dataset.

Note

The Abalone Dataset is sourced from the University of California, Irvine's ML repository: Dua, D. and Graff, C. (2019). UCI Machine Learning Repository [http://archive.ics.uci.edu/ml]. Irvine, CA: University of California, School of Information and Computer Science.

This dataset contains the various physical characteristics of the abalone that can be used to determine its age. The following steps will walk you through how to access and explore the dataset:

  1. We can load the dataset with the following sample Python code, which uses the pandas library (https://pandas.pydata.org) to ingest the data in a comma-separated value (csv) format using the read_csv() method. Since the source data doesn't have any column names, we can review the Attribute Information section of the dataset website and manually create our column_names:
    import pandas as pd
    column_names = ["sex", "length", "diameter", "height", "whole_weight", "shucked_weight", "viscera_weight", "shell_weight", "rings"]
    abalone_data = pd.read_csv("http://archive.ics.uci.edu/ml/machine-learning-databases/abalone/abalone.data", names=column_names)
  2. Now that the data has been downloaded, we can start analyzing it as a DataFrame. First, we will take a sample of the first five rows of the data to ensure we have successfully downloaded it and verify that it matches the attribute information highlighted on the website. The following sample Python code calls the head() method on the abalone_data DataFrame:
    abalone_data.head()

The following screenshot shows the output of executing this call:

Figure 1.4 – The first five rows of the Abalone Dataset

Although we are only viewing the first five rows of the data, it matches the attribute information provided by the repository website. For example, we can see that the sex column has nominal values showing if the abalone is male (M), female (F), or an infant (I). We also have the rings column, which is used to determine the age of the abalone. The additional columns, such as weight, diameter, and height, detail additional characteristics of the abalone. These characteristics all contribute to determining its age (in years). The age is calculated using the number of rings, plus 1.5.
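As a small illustration of this rule, the rings-to-age conversion can be expressed directly in Python. The helper function below is hypothetical and is not used in the rest of the walkthrough:

    def rings_to_age(rings):
        # The age (in years) is estimated as the number of rings plus 1.5
        return rings + 1.5

    print(rings_to_age(15))  # 16.5 years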

  3. Next, we can use the following sample code to call the describe() method on the abalone_data DataFrame:
    abalone_data.describe()

The following screenshot shows the summary statistics of the dataset, as well as various statistical details, such as the percentile, mean, and standard deviation:

Figure 1.5 – The summary statistics of the Abalone Dataset

Note

At this point, we can gain an understanding of the data by visualizing and plotting any correlations between the key features to further understand how the data is distributed, as well as to determine the most important features in the dataset. We should also determine whether we have missing data and if we have enough data.

Only using summary statistics to understand the data can often be misleading. Although we will not be performing these visualization tasks on this example, you can review why using graphical techniques is so important to understanding data by looking at the Anscombe's Quartet example on Kaggle (https://www.kaggle.com/carlmcbrideellis/anscombe-s-quartet-and-the-importance-of-eda).
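For readers who want to try it, the following minimal sketch shows what such a quick EDA pass might look like, assuming the abalone_data DataFrame created earlier. The exact plots and checks you run will depend on your data and use case:

    import matplotlib.pyplot as plt

    # Plot the distribution of every numeric feature to spot skew and potential outliers
    abalone_data.hist(figsize=(12, 8))
    plt.show()

    # Confirm that no column contains missing values
    print(abalone_data.isnull().sum())

    # Correlation of each numeric feature with the target (the nominal sex column is dropped first)
    print(abalone_data.drop(columns="sex").corr()["rings"].sort_values())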

The previous tasks highlight a few important observations we derived from the summary statistics of the dataset. For example, after reviewing the descriptive statistics from the dataset (Figure 1.5), we made the following important observations:

  • The count value for each column is 4177. We can deduce that we have the same number of observations for each feature and therefore, no missing values. This means that we won't have to somehow infer what these missing values might be or remove the row containing them from the data. Most ML algorithms fail if data is missing.
  • If you look at the 75% value for the rings column, there is a significant gap between 11 rings and the maximum number of rings, which is 29. This means that the data potentially contains outliers that could add unnecessary noise and influence the overall effectiveness of the trained model.
  • While the sex column is visible in Figure 1.4, the summary statistics displayed in Figure 1.5 do not include it. This is because of the type of data in this column. If you refer to the Attribute Information section of the dataset's website (https://archive.ics.uci.edu/ml/datasets/abalone), you will see that this sex column is comprised of nominal data. This type of data is used to provide a label or category for data that doesn't have a quantitative value. Since there is no quantitative value, the summary statistics for this column cannot be displayed. Depending on the type of ML algorithm that's selected to address the business objective, we may need to convert this data into a quantitative format as not all ML algorithms will work with nominal data.

The next set of steps will help us apply what we have learned from the dataset to make it more compatible with the model training part of the process:

  1. In this step, we focus on converting the sex column into quantitative data. The sample code uses the pandas get_dummies() function on the abalone_data DataFrame, which will convert the categories of Male (M), Female (F), and Infant (I) into separate feature columns. Here, the data in these new columns will either reflect one of the categories, represented by a one (1) if true or a zero (0) if false:
    abalone_data = pd.get_dummies(abalone_data)
  2. Running the head() method again now shows the first five rows of the newly converted data:
    abalone_data.head()

The following screenshot shows the first five rows of the converted dataset. Here, you can see that the sex column has been removed and that, in its place, there are three new columns (one for each new category) with the data now represented as discrete values of 1 or 0:

Figure 1.6 – The first five rows of the converted Abalone Dataset

  3. The next step in preparing the data for model building and training is to separate the rings column from the data to establish it as the target variable we are trying to predict. The following sample code shows this:
    y = abalone_data.rings.values
    del abalone_data["rings"]
  4. Now that the target variable has been isolated, we can normalize the features. Not all datasets require normalization, however. By looking at Figure 1.5, we can see that the summary statistics show that the features have different ranges. These different ranges, especially if the values are large, can influence the overall effectiveness of the model during training. Thus, by normalizing the features, the model can converge to a global minimum much faster. The following code sample shows how the existing features can be normalized (a quick verification follows this step) by first converting them into a NumPy array (https://numpy.org) and then using the normalize() method from the scikit-learn or sklearn Python library (https://scikit-learn.org/stable/):
    import numpy as np
    from sklearn import preprocessing
    X = abalone_data.values.astype(np.float64)  # np.float64 is used because the np.float alias was removed in newer NumPy versions
    X = preprocessing.normalize(X)
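The following quick check, which is not part of the original walkthrough, confirms what normalize() has done: each sample (row) of X now has a unit L2 norm:

    import numpy as np

    # Every row vector should now have a length (L2 norm) of 1
    print(np.allclose(np.linalg.norm(X, axis=1), 1.0))  # True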

Based on the initial observations from the dataset, we have applied the necessary transformations to prepare the features for model training. For example, we converted the sex column from a nominal data type into a quantitative data type since this data will play an important part in determining the age of an abalone.

From this example, you can see that the goal of the data stage is to explore and understand the dataset. We also use this stage to apply what we've learned and change, or preprocess, the data into a representation that suits the downstream model building and training process.

Building the right model

Now that the data has been ingested, analyzed, and processed, we are ready to move on to the next stage of the ML process, where we will look at building the right ML model to suit both the business use case and our newly acquired understanding of the data:

Figure 1.7 – The model building stage

Unfortunately, there is no one-size-fits-all algorithm that can be applied to every use case. However, by taking the knowledge we have gleaned from both the business objective and the dataset, we can define a list of potential algorithms to use.

For example, we know from our business case that we want to predict the age of the abalone by using the number of rings. We also know from analyzing and understanding the dataset that we have a target, or labeled, variable in the rings column. This target variable is a discrete numerical value between 1 and 29, so we can refine our list of possible algorithms to supervised learning algorithms that predict a numerical value among a discrete set of possible values.
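A quick, optional sanity check on the y target array created in the previous section confirms this discrete, bounded range (the expected values are taken from the summary statistics reviewed earlier):

    import numpy as np

    # The ring counts are discrete integers within a bounded range
    print(y.min(), y.max())   # expected: 1 29
    print(len(np.unique(y)))  # number of distinct ring counts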

The following are just a few of the possible algorithms that could be applied to the example business case:

  • Linear regression
  • Support vector machines
  • Decision trees
  • Naïve Bayes
  • Neural networks

Once again, there is no one algorithm in this list that perfectly matches the use case and the data. Therefore, the ML process is an experiment to work through multiple possible permutations, get insight from each permutation, and apply what has been learned to further refine the optimal model.

Some of the additional factors that influence which algorithm to start with are based on the ML practitioner's experience, plus how the chosen algorithm addresses the required business goals and success measurements. For example, if a required success criterion is to have the model completed within 2 weeks, then that might eliminate the option to use a more complicated algorithm.

Building a neural network model

Continuing with the Age Calculator experiment, we will implement a neural network algorithm, also referred to as Artificial Neural Network (ANN), Deep Neural Network (DNN), or Multilayer Perceptron (MLP).

At a high level, a neural network is an artificial construct modeled on the brain, whereby small, non-linear calculations are made on the data by what is commonly referred to as a neuron or perceptron. By grouping these neurons into individual layers and then compounding these layers together, we can assemble the building blocks of a mechanism that takes the data as input and finds the dependencies (or correlations) for the output (or target). Through an optimization process, these dependencies are further refined to get the predicted output as close as possible to the actual target value.

Note

The primary reason a neural network model is being used in this example is to introduce a deep learning framework. Deep learning frameworks, such as PyTorch (https://pytorch.org/), TensorFlow (https://www.tensorflow.org/), and MXNet (https://mxnet.apache.org/), can be used to create more complicated neural networks. However, from the perspective of ML process automation, they can also introduce several complexities. So, by making use of a deep learning framework, we can lay the foundation to address some of these complexities later in this book.

The following is a graphical representation of the neural network architecture that we will be building for our example:

Figure 1.8 – Neural network architecture

The individual components that make up this architecture will be explained in the following steps:

  1. To start building the model architecture, we need to load the necessary libraries from the TensorFlow deep learning framework. Along with the tensorflow libraries, we will also import the Keras API. The Keras (https://keras.io/) library allows us to create higher-level abstractions of the neural network architecture that are easier to understand and work with. For example, from Keras, we also load the Sequential and Dense classes. These classes allow us to define a model architecture that uses sequential neural network layers and define the type and quantity of neurons in each of these layers:
    import tensorflow as tf
    from tensorflow import keras
    from tensorflow.keras.models import Sequential
    from tensorflow.keras.layers import Dense
  2. Next, we can use the Dense class to define the list of layers that make up the neural network:
    network_layers = [
        Dense(256, activation='relu', kernel_initializer="normal", input_dim=10),
        Dense(128, activation='relu'),
        Dense(64, activation='relu'),
        Dense(32, activation='relu'),
        Dense(1, activation='linear')
    ]
  3. Next, we must define the model as being a Sequential() model or simply a list of layers:
    model = Sequential(network_layers)
  4. Once the model structure has been defined, we must compile it for training using the compile() method:
    model.compile(optimizer="adam", loss="mse", metrics=["mae", "accuracy"])
  5. Once the model has been compiled, the summary() method can be called to view its architecture:
    model.summary()

The following screenshot shows the results of calling this method. Even though it's showing text output, the network architecture matches the one shown in Figure 1.8:

Figure 1.9 – Summary of the compiled neural network architecture

As you can see, the first layer of the model matches Layer 1 in Figure 1.8, where the Dense() class is used to express that this layer has 256 neurons, or units, that connect to every neuron in the next layer. Layer 1 also initializes the parameters (model weights and bias) so that each neuron behaves differently and captures the different patterns we wish to optimize through training. Layer 1 is also configured to expect input data that has 10 dimensions. These dimensions correspond to the following features of the Abalone Dataset:

  • Length
  • Diameter
  • Height
  • Whole Weight
  • Shucked Weight
  • Viscera Weight
  • Shell Weight
  • Sex_F
  • Sex_I
  • Sex_M

Layer 1 is also configured to use the nonlinear Rectified Linear Unit (ReLU) activation function, which allows the neural network to learn complex relationships from the dataset. We then repeat the process, adding Layer 2 through Layer 5, specifying that these layers have 128, 64, 32, and 1 neuron(s) or unit(s), respectively. The final layer only has a single output – the predicted number of rings. Since the objective of the model is to determine how this output relates to the actual number of rings in the dataset, a linear activation function is used.

Once we have constructed the model architecture, we use the following important parameters to compile the model using the compile() method:

  • Loss: This parameter specifies the type of objective function (also referred to as the cost function) that will be used. At a high level, the objective function calculates how far away or how close the predicted result is to the actual value. It calculates the amount of error between the number of rings that the model predicts, based on the input data, versus what the actual number of rings is. In this example, the Mean Squared Error (MSE) is used as the objective function, where the average of the squared errors is measured across all the data points (a short numeric sketch of these measures follows this list).
  • Optimizer: The objective during training is to minimize the amount of error between the predicted number of rings and the actual number of rings. The Adam optimizer is used to iteratively update the neural network weights that contribute to reducing the loss (or error).
  • Metrics: The evaluation metrics, Mean Absolute Error (MAE) and prediction accuracy, are captured during model training and used to provide insight into how effectively the model is learning from the input data.

    Note

    If you are unfamiliar with any of these terms, there are a significant number of references available when you search for them. Additionally, you may find it helpful to take the Deep Learning Specialization course offered by Coursera (https://www.coursera.org/specializations/deep-learning). Further details on these parameters can be found in the Keras API documentation (https://keras.io/api/models/model_training_apis/#compile-method).
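To make the loss and metric measures more concrete, here is a toy calculation with made-up numbers. It is purely illustrative and unrelated to the actual abalone predictions:

    import numpy as np

    y_true = np.array([10.0, 12.0, 7.0])    # actual ring counts
    y_pred = np.array([9.0, 14.0, 7.5])     # hypothetical model predictions

    mse = np.mean((y_true - y_pred) ** 2)   # the loss minimized during training
    mae = np.mean(np.abs(y_true - y_pred))  # the metric reported alongside training
    print(mse, mae)                         # 1.75 1.1666...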

Now that we have built the architecture for the neural network algorithm, we need to see how it fits on top of the preprocessed dataset. This task is commonly referred to as training the model.

Training the model

The next step of the ML process, as illustrated in the following diagram, is to train the model on the preprocessed abalone data:

Figure 1.10 – The model training stage

Training the compiled model is relatively straightforward. The following steps outline how to kick off the model training part of the process:

  1. This first step is not necessary to train the model, but sometimes, the output from the training process can be unwieldy and difficult to interpret. Therefore, a custom class called cleanPrint() can be created to ensure that the training output is neat. This class extends the Keras Callback() class to print a dash ("-") for each training epoch and an exclamation mark ("!") every 100 epochs:
    class cleanPrint(keras.callbacks.Callback):
        def on_epoch_end(self, epoch, logs):
            if (epoch + 1) % 100 == 0:
                print("!")
            else:
                print("-", end="")

    Note

    It is a good practice to display the model's performance at each epoch as this provides insight into the improvements after each epoch. However, since we are training for 2000 epochs, we are using the cleanPrint() class to make the output neater. We will remove this callback later.

  2. Next, we must separate the preprocessed abalone data into two main groups – one for training data and one for testing data. The split is performed using the train_test_split() function from the model_selection module of the sklearn library:
    from sklearn.model_selection import train_test_split 
    training_features, testing_features, training_labels, testing_labels = train_test_split(X, y, test_size=0.2, random_state=42)
  3. The final part of the training process is to launch the model training process. This is done by calling the fit() method on the compiled model and supplying the training_features and training_labels datasets, as shown in the following example code:
    training_results = model.fit(training_features, training_labels, validation_data=(testing_features, testing_labels), batch_size=32, epochs=2000, shuffle=True, verbose=0, callbacks=[cleanPrint()])

Now that the model training process has started, we can review a few key aspects of our code. First, splitting the data into training and testing datasets is typically performed as part of the data preprocessing step. However, we are performing this task during the model training step to provide additional context to the loss and optimization functions. For example, creating these two separate datasets is an important part of evaluating how well the model is being trained. The model is trained using the training dataset and then its effectiveness is evaluated against the testing dataset. This evaluation procedure guides the model (using the loss function and the optimization function) to reduce the amount of error between the predicted number of rings and the actual number of rings. In essence, this makes the model better or optimizes the model. The train_test_split() call produces four separate datasets, as follows:

  • training_features: The 10 columns of the Abalone Dataset that correspond to the abalone attributes, comprising 80% of these observations.
  • testing_features: The same 10 columns of the Abalone Dataset, comprising the other 20% of the observations.
  • training_labels: The number of rings (target label) for each observation in the training_features dataset.
  • testing_labels: The number of rings (target label) for each observation in the testing_features dataset.

    Tip

    Further details about each of these parameters, as well as more parameters that you can use to tweak the training process, can be found in the Keras API documentation (https://keras.io/api/models/model_training_apis/#fit-method).

Secondly, once the data has been successfully split, we can use the fit() method and add the following parameters to further govern the training process:

  • validation_data: The testing_features and testing_labels datasets, which the model uses to evaluate how well the trained neural network weights reduce the amount of error between the predicted number of rings and the actual number of rings in the testing data.
  • batch_size: This parameter defines the number of samples from the training data that are propagated through the neural network. This parameter can be used to influence the overall speed of the training process. The higher batch_size is, the more samples are combined to estimate the loss before the neural network's weights are updated.
  • epochs: This parameter defines how many times the training process will iterate through the training data. The higher epochs is, the more iterations must be made through the training data to optimize the neural network's weights.
  • shuffle: This parameter specifies whether to shuffle the data before starting a training iteration. Shuffling the data each time the model iterates through the data forces the model to generalize better and prevent it from learning ordered patterns in the training data.
  • verbose and callbacks: These parameters are related to displaying the training progress and output for each epoch. Setting the output to zero and using the cleanPrint() class will simply display a dash (-) as the output for each epoch.

The training process should take 12 minutes to complete, providing us with a trained model object. In the next section, we will use the trained model to evaluate how well it makes predictions on new data.

Evaluating the trained model

Once the model has been trained, we can move on to the next stage of the ML process: the model evaluation stage. It is at this stage that the trained model is evaluated against the objectives and success criteria that have been established within the business use case, with the goal being to determine whether the trained model is ready for production:

Figure 1.11 – The model evaluation step

When evaluating a trained model, most ML practitioners simply score the quality of the model predictions using an evaluation metric that is suited to the type of model. Other ML practitioners go one step further to visualize and further understand the predictions. The following steps will walk you through using the latter of these two approaches:

  1. Using the following sample code, we can load the necessary Python libraries. The first library is matplotlib, whose pyplot module is a collection of functions that allow for interactive and programmatic plot generation. The second import, mean_squared_error(), comes from the sklearn package and provides the ML practitioner with an easy way to evaluate the quality of the model using the Root Mean Squared Error (RMSE) metric. Since the neural network model is a supervised learning-based regression model, RMSE is a popular method that's used to measure the error rate of the model predictions:
    import matplotlib.pyplot as plt
    from sklearn.metrics import mean_squared_error
  2. The imported libraries are then used to visualize the predictions to provide a better understanding of the model's quality. The following code generates a plot that incorporates the information that's required to quantify the prediction's quality:
    fig, ax = plt.subplots(figsize=(15, 10))
    ax.plot(testing_labels, model.predict(testing_features), "ob")
    ax.plot([0, 25], [0, 25], "-r")
    ax.text(8, 1, f"RMSE = {mean_squared_error(testing_labels, model.predict(testing_features), squared=False)}", color="r", fontweight=1000)
    plt.grid()
    plt.title("Abalone Model Evaluation", fontweight="bold", fontsize=12)
    plt.xlabel("Actual 'Rings'", fontweight="bold", fontsize=12)
    plt.ylabel("Predicted 'Rings'", fontweight="bold", fontsize=12)
    plt.legend(["Predictions", "Regression Line"], loc="upper left", prop={"weight": "bold"})
    plt.show()

Executing this code will create a single plot with two overlaid elements. The first is a scatterplot displaying the model predictions from the test dataset against the ground truth labels. The second superimposes a regression line over these predictions to highlight the linear relationship between the predicted number of rings and the actual number of rings. The rest of the code labels the various properties of the plot and displays the RMSE score of the predictions. The following is an example of this plot:

Figure 1.12 – An example Abalone Model Evaluation scatterplot

Three things should immediately stand out here:

  • The RMSE evaluation metric scores the trained model at 2.54.
  • The regression line depicting the correlation between the actual number of rings and the predicted number of rings does not pass through the majority of the predictions.
  • There are a significant number of predictions that are far away from the regression line on both the positive and negative scales. This shows a high error rate between the number of rings that are predicted versus the actual number of rings for a data point.

These observations and others should be compared to the objectives and success criteria that are outlined in the business use case. Both the ML practitioner and business owner can then judge whether the trained model is ready for production.

For example, if the primary objective of the Age Calculator application is to use the model predictions as a rough guide for the fishermen to get a simple idea of the abalone age, then the model does this and can therefore be considered ready for production. If, on the other hand, the primary goal of the Age Calculator application is to provide an accurate age prediction, then the example model probably cannot be considered production-ready.

So, if we determine that the model is not ready for production, what are the subsequent steps of the ML process? The next section will review some options.

Exploring possible next steps

Since the model has been deemed unfit for production, several approaches can be taken after the model evaluation stage. The following diagram highlights three possible options that can be considered as possible next steps:

Figure 1.13 – Next step options

Let's explore these three possible next steps in more depth to determine which option best suits the objectives of the Age Calculator use case.

Option 1 – get more data

The first option requires the ML practitioner to go back to the beginning of the process and acquire more data. Since the UCI abalone repository is the only publicly available dataset, this task might involve physically gathering more observations by manually fishing for abalone or conducting a survey with fishermen on their catch. Either way, this takes time!

However, simply adding more observations to the dataset does not necessarily translate to a better-quality model. So, getting more data could also mean getting better-quality features. This means that the ML practitioner would need to reevaluate the existing data, dive further into the analysis to better understand which of the features are of the most importance, and then re-engineer those features or create new features from them. This too is time-consuming!

Option 2 – choose another model

The second option to consider involves building an entirely new model using a completely different algorithm that still matches the use case. For example, the ML practitioner might investigate using another supervised learning, regression-based algorithm.

Different algorithms might also require the data to be restructured so that it's more suited to the algorithm's required type of input. For example, choosing a Gradient Boosting Regression algorithm, such as XGBoost, requires the target label to be the first column in the dataset. Choosing another algorithm and reengineering the data requires additional time!
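As a hypothetical illustration of this kind of restructuring, and assuming a freshly loaded copy of the dataset that still contains the rings column, the target could be moved into the first position as follows:

    # Move the target ("rings") to the first column, as some algorithm implementations
    # expect label-first input; the variable name mirrors the earlier examples.
    cols = ["rings"] + [c for c in abalone_data.columns if c != "rings"]
    abalone_data = abalone_data[cols]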

Option 3 – tune the existing model

Recall that when the existing neural network model was built, there were a few tunable parameters that were configured during its compilation. For example, the model was compiled using particular optimizer and loss functions.

Additionally, when the existing neural network model was trained, other tunable parameters were supplied, such as the number of epochs and the batch size.

Note

There is no best practice for choosing the right option. Remember that each iteration through the process is an experiment whereby the goal is to glean more information from the experiment to determine the next course of action or next option.

While Option 3 may seem straightforward, in the next section, you will see that this option also involves multiple potential iterations and is therefore also time-consuming.

Tuning our model

As we've already highlighted, multiple parameters, or hyperparameters, can be adjusted to further tune or optimize an existing model. Hence, this stage of the process is also referred to as hyperparameter optimization. The following diagram shows what the hyperparameter optimization process entails:

Figure 1.14 – The hyperparameter optimization process

After evaluating the model to determine which hyperparameters can be tweaked, the model is trained using these parameters. The trained model is, once again, compared to the business objectives and success criteria to determine whether it is ready for production. This process is then repeated, constantly tweaking, training, and evaluating until a production-ready model is produced.
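A deliberately simplified sketch of this tweak-train-evaluate loop is shown below. The candidate values are assumptions chosen purely for illustration, and each trial rebuilds the network from scratch so that trials do not share weights:

    from tensorflow.keras.models import Sequential
    from tensorflow.keras.layers import Dense

    def build_model():
        # A fresh copy of the original architecture for every trial
        return Sequential([
            Dense(256, activation="relu", input_dim=10),
            Dense(128, activation="relu"),
            Dense(64, activation="relu"),
            Dense(32, activation="relu"),
            Dense(1, activation="linear")
        ])

    trials = [(200, 8), (500, 16), (1000, 32)]  # hypothetical (epochs, batch_size) candidates
    scores = {}
    for epochs, batch_size in trials:
        candidate = build_model()
        candidate.compile(optimizer="adam", loss="mse", metrics=["mae"])
        history = candidate.fit(training_features, training_labels,
                                validation_data=(testing_features, testing_labels),
                                epochs=epochs, batch_size=batch_size, verbose=0)
        scores[(epochs, batch_size)] = history.history["val_mae"][-1]

    # Report the candidate with the lowest validation MAE
    print(min(scores, key=scores.get), min(scores.values()))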

Determining the best hyperparameters to tune

Once again, there is no exact approach to getting the optimal hyperparameters. Each iteration through the process helps narrow down which combination of hyperparameters contributes to a more optimized model.

However, a good place to start the process is to dive deeper into what is happening during model training and derive further insights into how the model is learning from the data.

You will recall that, when executing the fit() method to train the model, we bound the results to the training_results variable, which captures additional metrics that are useful for model tuning. The following steps will walk you through an example of how to extract and visualize these metrics:

  1. By using the history attribute of the training_results variable, we can use the following sample code to plot the prediction error for both the training and testing processes:
    plt.rcParams["figure.figsize"] = (15, 10)
    plt.plot(training_results.history["loss"])
    plt.plot(training_results.history["val_loss"])
    plt.title("Training vs. Testing Loss", fontweight="bold", fontsize=14)
    plt.ylabel("Loss", fontweight="bold", fontsize=14)
    plt.xlabel("Epochs", fontweight="bold", fontsize=14)
    plt.legend(["Training Loss", "Testing Loss"], loc="upper right", prop={"weight": "bold"})
    plt.grid()
    plt.show()

The following is an example of what the plot might look like after executing the preceding code:

Figure 1.15 – Training vs. Testing Loss

  2. Similarly, by replacing the loss and val_loss parameters in the sample code with mae and val_mae, respectively, we can see a consistent trend:
    plt.rcParams["figure.figsize"] = (15, 10)
    plt.plot(training_results.history["mae"])
    plt.plot(training_results.history["val_mae"])
    plt.title("Training vs. Testing Mean Absolute Error", fontweight="bold", fontsize=14)
    plt.ylabel("mae", fontweight="bold", fontsize=14)
    plt.xlabel("Epochs", fontweight="bold", fontsize=14)
    plt.legend(["Training MAE", "Testing MAE"], loc="upper right", prop={"weight": "bold"})
    plt.grid()
    plt.show()

After executing the preceding code, we will get the following output:

Figure 1.16 – Training vs. Testing Mean Absolute Error

Both Figure 1.16 and Figure 1.15 clearly show a few especially important trends:

  • There is a clear divergence between what the model is learning from the training data and its predictions on the testing data. This indicates that the model is not learning anything new as it trains and is essentially overfitting the data. The model has memorized the training data and is unable to generalize to the new, unseen data in the testing dataset.
  • This divergence seems to happen around 250 epochs/training iterations. Since the training process was set to 2,000 epochs, this indicates that the model is being over-trained, which could be the reason it is overfitting the training data.
  • Both the testing MAE and the testing loss have an erratic gradient. This means that as the model parameters are being updated through the training process, the magnitude of the updates is too large, resulting in an unstable neural network, and therefore unstable predictions on the testing data. So, the fluctuations depicted by the plot essentially highlight an exploding gradient problem, indicating that the model is overfitting the data.

Based on these observations, several hyperparameters can be tuned. For example, an obvious parameter to change is the number of epochs or training iterations to prevent overfitting. Similarly, we could change the optimization function from Adam to Stochastic Gradient Descent (SGD). SGD allows a specific learning rate to be set as one of its parameters, as opposed to the adaptive learning rate used by the Adam optimizer. By specifying a small learning rate parameter, we are essentially rescaling the model updates to ensure that they are small and controlled.
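For example, recompiling the model to use SGD with a small, fixed learning rate might look like the following sketch. The 0.01 value is an assumed starting point rather than a recommendation from the book:

    from tensorflow.keras.optimizers import SGD

    # A small, fixed learning rate keeps each weight update small and controlled
    model.compile(optimizer=SGD(learning_rate=0.01), loss="mse", metrics=["mae", "accuracy"])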

Another solution might be to use a regularization technique, such as L1 or L2 regularization, to penalize some of the neurons on the model, thus creating a simpler neural network. Likewise, simplifying the neural network architecture by reducing the number of layers and neurons within each layer would have the same effect as regularization.
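A minimal sketch of adding L2 regularization to a simplified architecture is shown below. The 0.001 penalty factor is an assumption for illustration purposes; in practice, it is yet another hyperparameter to tune:

    from tensorflow.keras.layers import Dense
    from tensorflow.keras.regularizers import l2

    # Penalize large weights so the network is encouraged to stay simple
    regularized_layers = [
        Dense(64, activation="relu", kernel_regularizer=l2(0.001), input_dim=10),
        Dense(64, activation="relu", kernel_regularizer=l2(0.001)),
        Dense(1, activation="linear")
    ]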

Lastly, reducing the number of samples or batch size can control the stability of the gradient during training.

Now that we have a fair idea of which hyperparameters to tweak, the next section will show you how to further optimize the model.

Tuning, training, and reevaluating the existing model

We can start model tuning by walking through the following steps:

  1. The first change we must make is to the neural network architecture itself. The following example code depicts the new structure, where only two hidden layers are used instead of four, and each hidden layer only has 64 neurons:
    network_layers = [
        Dense(64, activation='relu', kernel_initializer="normal", input_dim=10),
        Dense(64, activation='relu'),
        Dense(1, activation='linear')
    ]
  2. Once again, the model is recompiled using the same parameters as those from the previous example:
    model = Sequential(network_layers)
    model.compile(optimizer="adam", loss="mse", metrics=["mae", "accuracy"])
    model.summary()

The following screenshot shows the text summary of the tuned neural network architecture:

Figure 1.17 – Summary of the tuned neural network architecture

The following diagram shows a visual representation of the tuned neural network architecture:

Figure 1.18 – Tuned neural network architecture

  3. Lastly, the fit() method is called on the new model. However, this time, the number of epochs has been reduced to 200 and batch_size has also been reduced to 8:
    training_results = model.fit(training_features, training_labels, validation_data=(testing_features, testing_labels), batch_size=8, epochs=200, shuffle=True, verbose=1)

    Note

    In the previous code example, the cleanPrint() callback has been removed to show the evaluation metrics on both the training and validation data at 200 epochs.

  4. Once the new model training has been completed, the previously used evaluation code can be re-executed to display the evaluation scatterplot. The following is an example of this scatterplot:
Figure 1.19 – Abalone Evaluation scatterplot

The new model does not capture all the predictions as there are still several outliers on the positive and negative scales. However, there is a drastic improvement to the overall fit on most data points. This is further quantified by the RMSE score dropping from 2.54 to 2.08.

Once again, these observations should be compared to the objectives and the success criteria that are outlined in the business use case to gauge whether the model is ready for production.

As the following diagram illustrates, if a production-ready model cannot be found, then the options to further tune the model, get and engineer more data, or build a completely different model are still available:

Figure 1.20 – Additional process options

Should the model be deemed production-ready, the ML practitioner can move on to the final stage of the ML process. As shown in the following diagram, this is the model deployment stage:

Figure 1.21 – The model deployment stage

In the next section, we will review the processes involved in deploying the model into production.

Deploying the optimized model into production

Model deployment is somewhat of a gray area in that some ML practitioners do not apply this stage to their ML process. For example, some ML practitioners may feel that the scope of their task is to simply provide a production-ready ML model that addresses the business use case. Once this model has been trained, they simply hand it over to the application development teams or application owners for them to test and integrate the model into the application.

Alternatively, some ML practitioners will work with the application teams to deploy the model into a test or Quality Assurance (QA) environment to ensure that the trained model successfully integrates with the application.

Whatever the scope of the ML practitioner role, model deployment is part of the CRISP-DM methodology and should always be factored into the overall ML process, especially if the ML process is to be automated.

While the CRISP-DM methodology ends with the model deployment stage, as shown in the preceding diagram, the process is, in fact, a continuous one. Once the model has been deployed into a production application, it needs to be constantly monitored to ensure that it does not drift from its intended purpose and continues to provide accurate predictions on new or unseen data. Should this situation arise, the ML practitioner will be called upon to start the ML process again to reoptimize the model and make it generalize to this new data. The following diagram shows what the ML process looks like in reality:

Figure 1.22 – Closing the loop

So, once again, why is the ML process hard?

Using this simple example use case, you can hopefully see that not only are there inherent complexities in exploring the data, as well as in building, training, evaluating, tuning, deploying, and monitoring the model – the entire process is also manual, iterative, and continuous.

How can we streamline the process to ensure that the outcome is always an optimized model that matches the business use case? This is where AutoML comes into play.

Streamlining the ML process with AutoML

AutoML is a broad term that has a different meaning depending on who you ask. When referring to AutoML, some ML practitioners may point to a dedicated software application, a set of tools/libraries, or even a dedicated cloud service. In a nutshell, AutoML is a methodology that allows you to create a repeatable, reliable, streamlined, and, of course, automated ML process.

The process is repeatable in that it follows the same pattern every time it is executed. The process is reliable in that it guarantees that an optimized model that matches the use case is always produced. The process is streamlined in that any unnecessary steps are removed, making it as efficient as possible. Finally, and most importantly, the process can be started and executed automatically, triggered by an event such as retraining the model after model concept drift has been detected.

AWS provides multiple capabilities that can be used to build a streamlined AutoML process. In the next section, I will highlight some of the dedicated cloud services, as well as other services, that can be leveraged to make the ML process both easier and more automated.

How AWS makes automating the ML development and deployment process easier

The focus of the remaining chapters in this book will be to practically showcase, using hands-on examples, how the ML process can be automated on AWS. By expanding on the Age Calculator example, you will see how various AWS capabilities and services can be used to do this. For example, the next two chapters of this book will focus on how to use some of the native capabilities of the AWS AI/ML stack, such as the following:

  • Using SageMaker Autopilot to automatically create, manage, and deploy an optimized abalone prediction model, using both codeless and coded methods.
  • Using the AutoGluon libraries to determine the best deep learning algorithm to use for the abalone model, as well as for more complicated ML use cases, such as computer vision (a brief sketch follows this list).
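To give you an early feel for the coded approach, here is a minimal AutoGluon sketch for a tabular regression problem. The CSV filenames and the 'rings' target column are assumptions for illustration; the actual abalone workflow is covered in detail in Chapter 3:

    from autogluon.tabular import TabularDataset, TabularPredictor

    # Load training and test data (assumed CSV files with a 'rings' target column)
    train_data = TabularDataset("abalone_train.csv")
    test_data = TabularDataset("abalone_test.csv")

    # AutoGluon trains and ensembles multiple model types, then ranks the candidates
    predictor = TabularPredictor(label="rings").fit(train_data)
    print(predictor.leaderboard(test_data))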

Parts two, three, and four of this book will focus on leveraging other AWS services that are not necessarily part of the AI/ML stack, such as the following:

  • AWS CodeCommit and CodePipeline, which will deliver the abalone use case using a Continuous Integration and Continuous Delivery (CI/CD) pipeline.
  • AWS Step Functions and the Data Science Python SDK, to create a codified pipeline to produce the abalone model (sketched briefly after this list).
  • Amazon Managed Workflows for Apache Airflow (MWAA), to automate and manage the ML process.
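As a preview of the Data Science SDK, the following sketch chains a single SageMaker training step into a Step Functions workflow. The IAM role ARNs, S3 locations, job name, and the choice of the built-in XGBoost algorithm are all placeholders for illustration; Chapters 6 and 7 build out the real pipeline:

    import sagemaker
    from sagemaker.estimator import Estimator
    from stepfunctions.steps import Chain, TrainingStep
    from stepfunctions.workflow import Workflow

    session = sagemaker.Session()
    sagemaker_role = "arn:aws:iam::123456789012:role/SageMakerExecutionRole"      # hypothetical role
    workflow_role = "arn:aws:iam::123456789012:role/StepFunctionsWorkflowRole"    # hypothetical role

    # A generic built-in XGBoost estimator stands in for the abalone model here
    estimator = Estimator(
        image_uri=sagemaker.image_uris.retrieve("xgboost", session.boto_region_name, "1.5-1"),
        role=sagemaker_role,
        instance_count=1,
        instance_type="ml.m5.large",
        output_path="s3://my-bucket/abalone/output/",  # placeholder bucket
        sagemaker_session=session,
    )

    # Wrap the training job as a workflow step and create the state machine
    training_step = TrainingStep(
        "Train abalone model",
        estimator=estimator,
        data={"train": "s3://my-bucket/abalone/train/"},  # placeholder S3 location
        job_name="abalone-training-job",
    )

    workflow = Workflow(
        name="abalone-ml-workflow",
        definition=Chain([training_step]),
        role=workflow_role,
    )
    workflow.create()
    workflow.execute()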

Finally, part five of this book will expand on some of the central topics that were covered in parts two and three to provide you with a hands-on example of how a cross-functional, agile team can implement the end-to-end Abalone Calculator example as part of a Machine Learning Software Development Life Cycle (MLSDLC).

Summary

As I stated from the outset, the primary goal of this chapter was to emphasize the many challenges an ML practitioner may face when building an ML solution for a business use case. In this chapter, I introduced you to an example ML use case – the Abalone Calculator – and I used it to show you just how hard the ML process is in reality.

By walking through each step of the process, I explained the complexities involved therein, as well as the challenges you could potentially encounter. I also highlighted why the ML process is complicated, manual, iterative, and continuous, which set the stage for an automated process that is repeatable, streamlined, and reliable using AutoML.

In the next chapter, we will explore how to start implementing an AutoML methodology by introducing you to a native AWS service called SageMaker Autopilot.


Key benefits

  • Explore the various AWS services that make automated machine learning easier
  • Recognize the role of DevOps and MLOps methodologies in pipeline automation
  • Get acquainted with additional AWS services such as Step Functions, MWAA, and more to overcome automation challenges

Description

AWS provides a wide range of solutions to help automate a machine learning workflow with just a few lines of code. With this practical book, you'll learn how to automate a machine learning pipeline using the various AWS services. Automated Machine Learning on AWS begins with a quick overview of what the machine learning pipeline/process looks like and highlights the typical challenges that you may face when building a pipeline. Throughout the book, you'll become well versed with various AWS solutions such as Amazon SageMaker Autopilot, AutoGluon, and AWS Step Functions to automate an end-to-end ML process with the help of hands-on examples. The book will show you how to build, monitor, and execute a CI/CD pipeline for the ML process and how the various CI/CD services within AWS can be applied to a use case with the Cloud Development Kit (CDK). You'll understand what a data-centric ML process is by working with Amazon Managed Workflows for Apache Airflow and then build a managed Airflow environment. You'll also cover the key success criteria for an MLSDLC implementation and the process of creating a self-mutating CI/CD pipeline using AWS CDK from the perspective of the platform engineering team. By the end of this AWS book, you'll be able to effectively automate a complete machine learning pipeline and deploy it to production.

Who is this book for?

This book is for novice as well as experienced machine learning practitioners looking to automate the process of building, training, and deploying machine learning-based solutions into production, using both purpose-built and other AWS services. A basic understanding of the end-to-end machine learning process and concepts, Python programming, and AWS is necessary to make the most out of this book.

What you will learn

  • Employ SageMaker Autopilot and Amazon SageMaker SDK to automate the machine learning process
  • Understand how to use AutoGluon to automate complicated model building tasks
  • Use the AWS CDK to codify the machine learning process
  • Create, deploy, and rebuild a CI/CD pipeline on AWS
  • Build an ML workflow using AWS Step Functions and the Data Science SDK
  • Leverage the Amazon SageMaker Feature Store to automate the machine learning software development life cycle (MLSDLC)
  • Discover how to use Amazon MWAA for a data-centric ML process

Product Details

Publication date : Apr 15, 2022
Length : 420 pages
Edition : 1st
Language : English
ISBN-13 : 9781801811828



Table of Contents

Section 1: Fundamentals of the Automated Machine Learning Process and AutoML on AWS
Chapter 1: Getting Started with Automated Machine Learning on AWS
Chapter 2: Automating Machine Learning Model Development Using SageMaker Autopilot
Chapter 3: Automating Complicated Model Development with AutoGluon
Section 2: Automating the Machine Learning Process with Continuous Integration and Continuous Delivery (CI/CD)
Chapter 4: Continuous Integration and Continuous Delivery (CI/CD) for Machine Learning
Chapter 5: Continuous Deployment of a Production ML Model
Section 3: Optimizing a Source Code-Centric Approach to Automated Machine Learning
Chapter 6: Automating the Machine Learning Process Using AWS Step Functions
Chapter 7: Building the ML Workflow Using AWS Step Functions
Section 4: Optimizing a Data-Centric Approach to Automated Machine Learning
Chapter 8: Automating the Machine Learning Process Using Apache Airflow
Chapter 9: Building the ML Workflow Using Amazon Managed Workflows for Apache Airflow
Section 5: Automating the End-to-End Production Application on AWS
Chapter 10: An Introduction to the Machine Learning Software Development Life Cycle (MLSDLC)
Chapter 11: Continuous Integration, Deployment, and Training for the MLSDLC
Other Books You May Enjoy

