Machine Learning on Kubernetes: A practical handbook for building and using a complete open source machine learning platform on Kubernetes

By Faisal Masood, Ross Brigoli
Rating: 4.1 (10 ratings)
Paperback: Jun 2022, 384 pages, 1st Edition
eBook: €21.99 (list price €31.99)
Paperback: €39.99
Subscription: free trial, then €18.99 per month


Machine Learning on Kubernetes

Chapter 1: Challenges in Machine Learning

Many people believe that artificial intelligence (AI) means a humanoid robot or an intelligent computer program that takes over humanity. The surprising news is that we are not even close to this. A better term for such machines is human-like intelligence, or artificial general intelligence (AGI).

So, what is AI? A straightforward answer is a system that uses a combination of data and algorithms to make predictions. AI practitioners call this machine learning (ML). A particular subset of ML, called deep learning (DL), uses a series of steps, or layers, of computation (Goodfellow, Bengio, and Courville, 2017). This technique employs deep neural networks (DNNs), with multiple layers of artificial neurons that loosely mimic the architecture of the human brain. Although it sounds sophisticated, this does not mean that a DL system will always perform better than other ML algorithms or even a traditional programming approach.

ML is not always about DL. Sometimes, a basic statistical model may be a better fit for the problem you are trying to solve than a complex DNN. One of the challenges of implementing ML is selecting the right approach. Moreover, delivering an ML project comes with other challenges, not only on the business and technology side but also in people and processes. These challenges are the primary reasons why most ML initiatives fail to deliver their expected value.

In this chapter, we will revisit the basics of ML and examine the challenges in delivering ML projects that can prevent a project from delivering its promised value.

The following topics will be covered:

  • Understanding ML
  • Delivering ML value
  • Choosing the right approach
  • Facing the challenges of adopting ML
  • An overview of the ML platform

Understanding ML

In traditional computer programming, a human programmer must write a clear set of instructions for a computer program to perform an operation or answer a question. In ML, however, a human (usually an ML engineer or data scientist) uses data and an algorithm to determine the best set of parameters for a model that yields usable answers or predictions. While traditional computer programs provide answers using exact logic (Yes/No, Correct/Wrong), ML algorithms involve fuzziness (Yes/Maybe/No, 80% certain, Not sure, I do not know, and so on).

In other words, ML is a technique for solving problems by using data, along with an algorithm, statistical model, or neural network, to infer or predict the answer to a question. Instead of explicitly writing instructions for solving a problem, we provide a set of examples and let the algorithm figure out the best way (the best set of parameters) to solve it. ML is useful when it is impossible or extremely difficult to write such instructions by hand. A typical problem where ML shines is computer vision (CV). Though it is easy for any human to identify a cat, it is impossible or extremely difficult to manually write code that determines whether a given image contains a cat. If you are a programmer, try thinking about how you would write this code without ML; it is a good mental exercise.
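
To make this concrete, here is a minimal sketch, not taken from the book, of the ML way of working. It assumes scikit-learn is installed, and the data and feature names are invented for illustration: instead of hand-writing rules, we give the algorithm a few labelled examples, let it find the parameters, and get back an answer with a degree of certainty rather than a hard Yes/No.

```python
# A minimal sketch (not from the book): learn parameters from labelled
# examples instead of hand-writing rules. Data and feature names are invented.
from sklearn.linear_model import LogisticRegression

# Toy examples: [monthly_spend, visits_per_month] -> 1 if the customer churned
X = [[10.0, 1], [12.0, 2], [300.0, 20], [280.0, 18], [15.0, 1], [320.0, 25]]
y = [1, 1, 0, 0, 1, 0]

# The algorithm, not the programmer, finds the best set of parameters
model = LogisticRegression().fit(X, y)

# The answer comes back as a probability ("80% certain"), not a hard Yes/No
print(model.predict_proba([[290.0, 19]]))
```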

The following diagram illustrates where DL and ML sit in terms of AI:

Figure 1.1 – Relationship between AI, ML, and DL

AI is a broad subject: it covers everything from basic rule-based agent systems that can replace a human operator to ML and DL. ML alone is another broad subject, spanning many algorithms, from basic linear regression to very deep convolutional neural networks (CNNs). In traditional programming, no matter which language or framework we use, the process of developing and building applications is largely the same. In contrast, ML algorithms vary widely, and they sometimes require vastly different approaches to building and using models. For example, a generative adversarial network (GAN), an architecture used in many creative ML models such as those that generate fake human faces, is trained very differently from a basic decision tree model.

Because of the nature of ML projects, some practices in software engineering may not always apply to ML, and some practices, processes, and tools that are not present in traditional programming must be invented.

Delivering ML value

There are many books, videos, and lectures available on ML and its related topics. In this book, we will cover a more adaptive approach and show how open source software (OSS) can provide the basis for you and your organization to benefit from the AI revolution.

In later chapters, we will tackle the challenges behind operationalizing ML projects by deploying and using an open source toolchain on Kubernetes. Toward the end of the book, we will build a reusable ML platform that provides essential features that will help contribute to delivering a successful ML project.

Before we dig deeper into the software, we must have foundational knowledge, and we must know the practical steps required to successfully deliver business value with ML initiatives. With this knowledge, we will be able to address some of the challenges of implementing an ML platform and identify how it helps deliver the expected value from our ML projects. The primary reason this promised value is not realized is that models do not get to production. For example, imagine you built an excellent ML model that predicts the outcome of football World Cup matches, but no one could use it during the tournament. As a result, even though the model is successful, it failed to deliver its expected business value. Many organizations' AI and ML initiatives are in the same state. The data science or ML engineering team may have built a perfectly working ML model that could have helped the organization's business and/or its customers; however, these models do not usually get deployed to production. So, what are the challenges teams face that prevent them from putting their ML models into production?

Choosing the right approach

Before deciding to use ML for a given project, understand the problem first and assess whether it can be solved by ML. Invest enough time in working with the right stakeholders to see what the expectations are. Some problems may be better suited to traditional approaches, such as when you have predefined business rules for a given system. It is faster and easier to code rules than it is to train a model, and you do not need a huge amount of data.

While deciding whether or not to use ML, you can think in terms of whether pattern-based results will work for your problem. If you are building a system that reads data from the frequent-flyer database of an airline to find customers to whom you want to send a promotion, a rule-based system may give you good, acceptable results. An ML-based system may give you better matches for certain scenarios, but will the time spent building it be worth it?
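
For comparison, a purely rule-based version of the frequent-flyer scenario can be just a few lines of code. The field names and thresholds below are hypothetical, not taken from the book.

```python
# Hypothetical rule-based selection: field names and thresholds are illustrative.
def eligible_for_promotion(customer: dict) -> bool:
    """Apply fixed business rules instead of a trained model."""
    return (
        customer["miles_last_12_months"] >= 25_000
        and customer["flights_last_12_months"] >= 10
        and not customer["opted_out_of_marketing"]
    )

customers = [
    {"id": 1, "miles_last_12_months": 30_000, "flights_last_12_months": 12, "opted_out_of_marketing": False},
    {"id": 2, "miles_last_12_months": 8_000, "flights_last_12_months": 3, "opted_out_of_marketing": False},
]

print([c["id"] for c in customers if eligible_for_promotion(c)])  # [1]
```

If rules like these start multiplying, or the patterns become too subtle to express by hand, that is usually the point at which an ML model earns its keep.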

The importance of data

The effectiveness of your ML model depends on the quality and accuracy of your data. Unfortunately, data collection and processing activities often do not get the attention they deserve, which proves costly in later stages of the project when the model turns out not to be suitable for the given task.

"Everyone wants to do the model work, not the data work."

– Data Cascades in High-Stakes AI, Sambasivan et al. (see the Further reading section)

The paper cited here discusses this challenge. An interesting example quoted in the paper is of a team building a model to detect a particular pattern in patient scans; it worked brilliantly on test data. However, the model failed in production because the scans being fed into the model contained tiny dust particles, which degraded its performance. This is a classic case of a team focusing on model building and not on how the model will be used in the real world.

One thing teams should focus on is data validation and cleansing. Data is often missing or incorrect: for example, a string value in a numeric column, different date formats in the same field, or the same identifier (ID) used for different records when the records come from different systems. Such anomalies can result in a poor model and, ultimately, inferior performance.
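
As a minimal illustration, not taken from the book, a library such as pandas can surface exactly these anomalies by coercing invalid values so that they can be inspected, fixed, or dropped before training. The column names and values are invented.

```python
# Illustrative cleansing step with pandas; column names and values are invented.
import pandas as pd

raw = pd.DataFrame({
    "amount": ["100.5", "n/a", "250", "abc"],                         # strings in a numeric column
    "created_at": ["2022-06-24", "2022-06-25", "24/06/2022", None],   # inconsistent date formats
})

# Coerce invalid values to NaN/NaT so that they can be inspected, fixed, or dropped
raw["amount"] = pd.to_numeric(raw["amount"], errors="coerce")
raw["created_at"] = pd.to_datetime(raw["created_at"], errors="coerce")

clean = raw.dropna(subset=["amount", "created_at"])
print(clean)
```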

Once you've been through this process and come to the decision that yes, ML is the way to go… what next?

Facing the challenges of adopting ML

Organizations are eager to adopt ML to drive business growth. In many projects, however, teams become too focused on technical brilliance and fail to deliver the business value expected from the ML initiative. This can cause early failures that result in reduced investment for future projects. The two main challenges businesses face in making ML mainstream across all the various parts of the business are as follows:

  • Keeping the focus on the big picture
  • Siloed teams

Focusing on the big picture

The first challenge organizations face is building an ecosystem where ML models create value for the business. The challenging part is that teams often do not focus on all aspects of a project and instead focus only on specific areas, resulting in poor value for the business.

How many organizations that we know of are successful in their ML journey? Beyond the Googles, Metas (formerly Facebook), and Netflixes of the world, there are few success stories. The number one reason is that teams focus only on building the model. So, what else is there beyond the algorithm? Google published a paper about the hidden technical debt in ML projects (see the Further reading section at the end of this chapter), and it provides a good summary of the things we need to consider to be successful.

Have a look at the following diagram:

Figure 1.2 – The components of an ML system

Can you see the small block in Figure 1.2? The block captioned ML is the ML model development part, and you can see that there are many more processes involved in ML projects. Let's understand a few of them, as follows:

  • Data collection and data verification: To have a reliable and trustworthy model, we need a good set of data. ML is all about finding patterns in the data and predicting unseen data results using those patterns. Therefore, the better the quality of your data, the better your model will perform. The data, however, comes in all shapes and sizes. Some of it may reside in files, some in proprietary databases; a dataset may come from data streams, and some data may need to be harvested from Internet of Things (IoT) devices. On top of that, the data may be owned by different teams with different security and regulatory requirements. Therefore, you need to think about technologies that allow you to collect, transform, and process data from various sources and in a variety of formats.
  • Feature extraction and analysis: Often, assumptions about data quality and completeness turn out to be incorrect. Data science teams therefore perform an activity called exploratory data analysis (EDA), in which they read and process data from various sources to improve their understanding of it before investing time in processing the data at scale and moving to the model-building stage. Think about how your team or organization can facilitate this data exploration to speed up your ML journey.

Data analysis leads to a better understanding of the data, but feature extraction is a separate step. It is the process of identifying, through experiments, the data attributes that influence the accuracy of the model's output and the attributes that are irrelevant or mere noise. For example, in an ML model that classifies whether a bank transaction is fraudulent, the name of the account holder is likely irrelevant noise, while the amount of the transaction could be an important feature (a minimal sketch of this step follows this list). The output of this process is a transformed version of the dataset that contains only the relevant features and is formatted for consumption by the ML model training process, or fitness function. This is sometimes called a feature set. Teams need tools for performing such analysis and for transforming data into a format consumable for model training. Data collection, feature extraction, and analysis are also collectively called feature engineering (FE).

  • Infrastructure, monitoring, and resource management: You need computers to process and explore data, build and train your models, and deploy ML models for consumption. All these activities need processing power and storage capacity, at the lowest possible cost. Think about how your team will get access to hardware resources on-demand and in a self-service fashion. You need to plan how data scientists and engineers will be able to request the required resources in the fastest manner. At the same time, you still need to be able to follow your organization's policies and procedures. You also need system monitoring to optimize resource utilization and improve the operability of your ML platform.
  • Model development: Once you have data available in the form of consumable features, you need to build your models. Model building requires many iterations with different algorithms and different parameters. Think about how to track the outcomes of different experiments and where to store your models. Often, different teams can reuse each other's work to increase the velocity of the teams further. Think about how teams can share their findings. Teams must have a tool that can facilitate model training and experiment runs, record model performance and experiment metadata, store models, and manage the tagging of models and promotion to an acceptable and deployable state.
  • Process management: As you can see, a lot must happen to produce a useful model. Think about how you will automate model deployment and monitoring. Different personas will be working on different things, such as data tasks, model tasks, and infrastructure tasks, and the team needs to collaborate and share to achieve a particular outcome. The real world keeps changing: once your model is deployed into production, you may need to retrain it with new data regularly. All these activities need well-defined processes and automated stages so that the team can keep working on high-value tasks.
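
The following is the minimal feature-extraction sketch referred to in the feature extraction and analysis discussion above. It is illustrative only and assumes pandas; the column names are hypothetical rather than taken from the book. The noisy attribute (the account holder's name) is dropped, an informative attribute (the transaction amount) is kept, and a simple derived feature is added.

```python
# Hypothetical feature-extraction step for the fraud example; columns are invented.
import pandas as pd

transactions = pd.DataFrame({
    "account_holder_name": ["A. Smith", "B. Jones", "C. Lee"],   # treated as noise
    "amount": [42.50, 9800.00, 120.00],                          # likely informative
    "merchant_country": ["DE", "RU", "DE"],
    "account_home_country": ["DE", "DE", "DE"],
    "is_fraud": [0, 1, 0],                                       # label
})

features = pd.DataFrame({
    "amount": transactions["amount"],
    # Derived feature: the transaction happened outside the account's home country
    "is_foreign": (transactions["merchant_country"]
                   != transactions["account_home_country"]).astype(int),
})
labels = transactions["is_fraud"]

# "features" plus "labels" form the feature set handed to model training
print(features)
```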

In summary, you will need an ecosystem that can provide solution components for all of the following building blocks. A single platform of this kind will increase the team's velocity by providing a consistent experience for all the needs of an ML system:

  • Fetching, storing, and processing data
  • Training, tuning, and tracking models
  • Deploying and monitoring models
  • Automating repetitive tasks, such as data processing and model deployment

But how can we make different teams collaborate and use a common platform to do their tasks?

Breaking down silos

To complete an ML project, you need a team that comprises various roles. However, diverse roles bring challenges of communication, team dynamics, and conflicting priorities. In enterprises, these roles often belong to different teams in different business units (BUs).

ML projects need a variety of teams and personas to be successful. The following diagram shows some of the roles and responsibilities that are required to complete a simple ML project:

Figure 1.3 – Silos involved in ML projects

Let's look at these roles in more detail here:

  • Data scientist: This role is the best understood one. This persona or team is responsible for exploring the data and running experiment iterations to determine which algorithm is suitable for a given problem.
  • Data engineers: The persona or team in this role is responsible for ingesting data from various sources, cleaning the data, and making it useful for the data science teams.
  • Developers and operations: Once the model is built, this team is responsible for taking the model and deploying it to be used. The operations team is responsible for making sure that computers and storage are available for the other teams to perform data processing, model life-cycle operations, and model inference.
  • Business subject-matter expert (SME): Even though data scientists build the ML model, understanding the data and the business domain is critical to building the right model. Imagine a data scientist building a model for predicting COVID-19 without understanding the different parameters. An SME, in this case a medical doctor, would be required to help the data scientists understand the data before moving on to the model-building phase.

Of course, even with the building blocks in place, you're unlikely to succeed at the first attempt.

Fail-fast culture

Building a cross-functional team is not enough. Make sure that the team is empowered to make its own decisions and feels comfortable experimenting with different approaches. The data and ML fields are fast-moving, and the team may choose to adopt a recent technology or process, or let go of an existing one, based on the given success criteria.

Form a team of people who are passionate about the work and give them autonomy, and you will have the best possible outcome. Enable your teams so that they can adapt to change quickly and deliver value for your business. Establish an iterative, fast feedback cycle in which teams receive feedback on the work delivered so far. A quick feedback loop keeps the focus on solving the business problem.

However, this approach brings its own challenges. Adopting modern technologies may be difficult and time-consuming. Think of Amazon Marketplace: if you want to sell a hot new product, the marketplace lets you bring it to market faster because it takes care of many of the moving parts required to make a sale. Similarly, the ML platform you will learn about in this book enables you to experiment with modern approaches and technologies with ease by supplying basic common services and sandbox environments so your team can experiment quickly.

It is critical to the success of a project that people who belong to distinct groups form a cross-functional, autonomous team. Such a team moves with higher velocity, with less internal friction, and avoids tedious processes and delays. It is equally important that the cross-functional team is empowered to drive its own decisions and is supported by self-service platforms so that it can work independently. The ML platform described in this book provides the basis of one such platform where teams can collaborate and share.

Now, let's take a look at what kind of platform will help you address the challenges we have discussed.

An overview of the ML platform

In this section, we will talk about the capabilities of the ML platform that you will need to consider. The aim is to make you aware of the basic building blocks that could form an ecosystem for your team to help you in your ML journey. An ML platform can be thought of as a set of components that assists in the faster development and deployment of ML models and data pipelines.

There are three main characteristics of an ML platform, as outlined here:

  • A complete ecosystem: The platform should provide an end-to-end (E2E) solution that includes data life-cycle management, ML life-cycle management, application life-cycle management, and observability.
  • Built on open standards: The platform should provide a way to extend and build on the existing baseline. Because the field is fast-moving, it is critical that you can further enhance, tailor, and optimize platforms for your specific needs.
  • Self-serving: The platform should be able to provide the resources required by teams automatically and on-demand, from hardware requests to deploying software in production. The platform automates the provisioning of resources based on enterprise controls and recovers them once the job is completed. The resources can be central processing units (CPUs), memory, or disk, or can be software such as integrated development environments (IDEs) to write code or a combination of these.

The following diagram shows the various components of an ML platform that serves different personas, allowing them to collaborate on a common platform:

Figure 1.4 – Personas and their interaction with the platform

Apart from the characteristics presented in Figure 1.4, the platform must have the following technical capabilities:

  • Workflow automation: The platform should have some form of workflow automation capability, where data engineers can create jobs that perform repetitive tasks, such as data ingestion and preparation, and data scientists can orchestrate model training and automate model deployments.
  • Security: The platform must be secured to prevent data leaks and data loss that can have a negative impact on the business.
  • Observability: We do not want to run applications without observability, whether it is a traditional application or an ML model. Deploying applications in production without observability is like riding a bike blindfolded. The platform should have a good amount of observability where you can monitor the health and performance of the entire system or sub-system in near real time. This should also include an alerting capability.
  • Logging: Logging plays a key role in understanding what happened when systems start behaving in an unexpected way. The platform must have a solid logging mechanism to allow operations teams to better support the ML project.
  • Data processing and pipelining: Because ML projects rely on a huge amount of data, the platform must include a reliable fully featured data processing and data pipelining solution that can scale horizontally.
  • Model packaging and deployment: Not all data scientists are experienced software engineers. Although some may have experience in writing applications, it is not safe to assume that all data scientists can write production-grade applications and deploy them to production. Therefore, the platform must be able to automatically package an ML model into an application and serve it.
  • ML life cycle: The platform must also be capable of managing ML experiments, tracking performance, storing training and experiment metadata and feature sets, and versioning models (a minimal tracking sketch follows this list). This not only allows data scientists to work efficiently, but also allows them to work collaboratively.
  • On-demand resource allocation: One important feature an ML platform should have is the capability that allows data scientists and data engineers to provision their own runtime resources automatically and on-demand. This eliminates the need for manual requisition of resources and eliminates time wasted on waiting and handovers with operations teams. The platform must allow platform users to create their own environment and to allocate the right amount of compute resources they need to do their jobs.
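
To illustrate the ML life-cycle capability mentioned above, here is a minimal experiment-tracking sketch using MLflow, one of the open source tools used later in the book. The experiment name, parameters, and dataset are invented for the example, and by default the run is recorded to a local ./mlruns directory unless a shared tracking server is configured.

```python
# Minimal MLflow tracking sketch; experiment name, parameters, and data are invented.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# mlflow.set_tracking_uri("http://mlflow.example.com")  # shared tracking server in a real setup
mlflow.set_experiment("fraud-detection-demo")

X, y = make_classification(n_samples=200, n_features=5, random_state=42)

with mlflow.start_run():
    params = {"C": 0.5, "max_iter": 200}
    model = LogisticRegression(**params).fit(X, y)

    mlflow.log_params(params)                                            # experiment metadata
    mlflow.log_metric("train_accuracy", accuracy_score(y, model.predict(X)))
    mlflow.sklearn.log_model(model, "model")                             # versioned model artifact
```

The book returns to MLflow, along with the rest of the platform toolchain, in later chapters.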

There are already platform products that have most, if not all, of the capabilities you have just learned about. What you will learn in the later chapters of this book is how to build one such platform based on OSS on top of Kubernetes.

Summary

Even though ML is not new, the recent availability of relatively cheap computing power has allowed many companies to start investing in it. This widespread availability of hardware comes with its own challenges. Often, teams do not focus on the big picture, and that may result in ML initiatives not delivering the value they promise.

In this chapter, we have discussed two common challenges that enterprises face while going through their ML journey. The challenges span technology adoption as well as teams and how they collaborate. Being successful in your ML journey will require time, effort, and practice. Expect it to be more than just a technology change: it will require changing and improving the way you collaborate and use technology. Make your team autonomous and prepare it to adapt to change, enable a fail-fast culture, invest in technology, and always keep an eye on the business outcome.

We have also discussed some of the important attributes of an E2E ML platform. We will talk about this topic in-depth in the later parts of this book.

In the next chapter, we will introduce an emerging concept in ML projects, ML operations (MLOps). Through this, the industry is trying to bring the benefits of software engineering practices to ML projects. Let's dig in.

Further reading

If you want to learn more about the challenges in machine learning, you might be interested in the following articles as well.


Key benefits

  • Build a complete machine learning platform on Kubernetes
  • Improve the agility and velocity of your team by adopting the self-service capabilities of the platform
  • Reduce time-to-market by automating data pipelines and model training and deployment

Description

MLOps is an emerging field that aims to bring repeatability, automation, and standardization of the software engineering domain to data science and machine learning engineering. By implementing MLOps with Kubernetes, data scientists, IT professionals, and data engineers can collaborate and build machine learning solutions that deliver business value for their organization. You'll begin by understanding the different components of a machine learning project. Then, you'll design and build a practical end-to-end machine learning project using open source software. As you progress, you'll understand the basics of MLOps and the value it can bring to machine learning projects. You will also gain experience in building, configuring, and using an open source, containerized machine learning platform. In later chapters, you will prepare data, build and deploy machine learning models, and automate workflow tasks using the same platform. Finally, the exercises in this book will help you get hands-on experience in Kubernetes and open source tools, such as JupyterHub, MLflow, and Airflow. By the end of this book, you'll have learned how to effectively build, train, and deploy a machine learning model using the machine learning platform you built.

Who is this book for?

This book is for data scientists, data engineers, IT platform owners, AI product owners, and data architects who want to build their own platform for ML development. Although this book starts with the basics, a solid understanding of Python and Kubernetes, along with knowledge of the basic concepts of data science and data engineering, will help you grasp the topics covered in this book.

What you will learn

  • Understand the different stages of a machine learning project
  • Use open source software to build a machine learning platform on Kubernetes
  • Implement a complete ML project using the machine learning platform presented in this book
  • Improve on your organization's collaborative journey toward machine learning
  • Discover how to use the platform as a data engineer, ML engineer, or data scientist
  • Find out how to apply machine learning to solve real business problems

Product Details

Publication date: Jun 24, 2022
Length: 384 pages
Edition: 1st
Language: English
ISBN-13: 9781803241807
Vendor: Red Hat




Table of Contents

15 Chapters
Part 1: The Challenges of Adopting ML and Understanding MLOps (What and Why)
Chapter 1: Challenges in Machine Learning
Chapter 2: Understanding MLOps
Chapter 3: Exploring Kubernetes
Part 2: The Building Blocks of an MLOps Platform and How to Build One on Kubernetes
Chapter 4: The Anatomy of a Machine Learning Platform
Chapter 5: Data Engineering
Chapter 6: Machine Learning Engineering
Chapter 7: Model Deployment and Automation
Part 3: How to Use the MLOps Platform and Build a Full End-to-End Project Using the New Platform
Chapter 8: Building a Complete ML Project Using the Platform
Chapter 9: Building Your Data Pipeline
Chapter 10: Building, Deploying, and Monitoring Your Model
Chapter 11: Machine Learning on Kubernetes
Other Books You May Enjoy

Customer reviews

Rating distribution: 4.1 (10 ratings)
5 star: 60%
4 star: 10%
3 star: 10%
2 star: 20%
1 star: 0%

aly, Sep 21, 2022, 5 stars (Amazon Verified review):
I am coming from a strong k8s background and I wanted to transfer and bridge this knowledge with ML. This book gives a very practical guide to fill in the gaps and give me the needed MLOps practical and theoretical knowledge to use k8s for ML deployments on production. I highly recommend it for readers with k8s background and also who want to learn the modern way to deploy and build ML models in the public cloud such as AWS.
Daniel Sullivan, Sep 12, 2022, 5 stars (Amazon Verified review):
If you are building and deploying ML models in Kubernetes, this is a great place to start. The book does a good job of providing an overview of MLOps as data engineering while also introducing ML concepts like feature engineering and model evaluation. These provide the foundation for detailed explanations of how to install and use ML platform components, like Jupyter Notebooks, Apache Airflow, MLFlow, and Spark. I especially like the details on setting up Keycloak for authentication, which isn't strictly an ML component but the lack of an authentication system can be a blocker to production deployments. The book covers a lot of ground but it is well organized and the extensive use of screenshots and diagrams complement the text.
Guangping zhang, Jul 28, 2022, 5 stars (Amazon Verified review):
This book is about machine learning on Kubernetes. The book contains a lot of content about advanced progress in machine learning. MLOps is the convergence of ML, DevOps, and data engineering disciplines that focus on running ML in production. Kubernetes-based software can run anywhere, from small on-premises data centers to large cloud platforms (AWS, GCP, and Azure); this capability will give you the portability to run your ML platform anywhere you want. Kubeflow is a machine learning toolkit that provides a pipeline solution called Kubeflow Pipelines. This book discusses both the data engineering pipeline and the ML engineering pipeline, and introduces using MLFlow as an experiment tracking system and as a model registry system. Automation is the most popular field now; model deployment and monitoring need to be automated to increase efficiency. The GKE platform can package, deploy, and automate the model onto the platform, so you can automate all these steps using the workflow engine provided by the platform. GKE can also monitor your model performance and decide whether the model needs retraining on a new dataset or not. This book also introduces Docker and containers very well. Recently GKE was updated to Vertex AI; unfortunately this book doesn't mention this update, but the reader can easily find the knowledge online. I strongly suggest you read this good book.
venky, Aug 02, 2022, 5 stars (Amazon Verified review):
Best value for price and great content. I really enjoyed reading and implementing the concepts explained in the book. Most books introduce topics as bits and pieces but miss composing an end-to-end project with the topics they target to elaborate; Faisal and Ross did a great job in crafting chapters from novice to advanced level and curating the exercises across them. I can say that even if you are an experienced engineer in cloud tech with the ML area, you will learn a good amount of stuff from this book. This book has sparked my interest to learn more about Kubernetes and building an end-to-end platform; I am gonna publish a blog on ML platform setup. Thanks to the authors for pouring their experience and crafting such good hands-on exercises with great detail.
Yiqiao Yin, Aug 03, 2022, 5 stars (Amazon Verified review):
Both authors come from Red Hat and provide a lot of industry experience. I really like the book from the beginning because it starts with motivation and challenges in the industry. I really enjoyed the cloud-agnostic property. It's interesting to see from the authors' perspectives how the proposed platform can be a contributing factor to the entire MLOps effort and also deployable from cloud-based platforms. I find this part well written and I have had a lot of fun reading these chapters. Coming from my jobs working with people from different parts of the world, I think it's important that today's software platforms are equipped with cloud-based technology, so it resonates a lot with me to see that this is covered in the book. I have had a lot of fun and joy reading this book and I highly recommend it to everybody else! Please review the video if you want in-depth feedback on this book!
