Introducing ML on AWS

AWS puts ML in the hands of every developer, irrespective of their skill level and expertise, so that businesses can adopt the technology quickly and effectively. AWS focuses on removing the undifferentiated heavy lifting involved in building ML models, such as managing the underlying infrastructure, scaling training and inference jobs, and ensuring high availability of the models. It provides developers with a variety of purpose-built compute instances and containerized environments for the accelerated and distributed computing that large-scale ML jobs require. AWS offers a broad and deep set of ML capabilities for builders that can be connected, like Lego pieces, to create intelligent applications.

AWS ML services cover the full life cycle of an ML pipeline, from data annotation/labeling and data cleansing through feature engineering and model training to deployment and monitoring. AWS offers purpose-built services for problems in computer vision, natural language processing, forecasting, recommendation engines, and fraud detection, to name a few. It also provides automatic model creation and no-/low-code options for building ML models. The AWS ML services are organized into three layers, collectively known as the AWS machine learning stack.

Introducing the AWS ML stack

The following diagram represents the AWS AI/ML services stack as of April 2022.

Figure 1.7 – A diagram depicting the AWS ML stack as of April 2022

The stack can be used by expert practitioners who want to develop a project within the framework of their choice; data scientists who want to use the end-to-end capabilities of SageMaker; business analysts who can build their own models using SageMaker Canvas; and application developers with no previous ML skills who can add intelligence to their applications through simple API calls. The following are the three layers of the AWS AI/ML stack:

  • AI services layer: The AI services layer is the topmost layer of the AWS ML stack. It consists of services that require minimal knowledge of ML. Some services come with a pre-trained model that can simply be invoked using APIs from the AWS SDK, the AWS CLI, or the console. Others allow you to customize the model by providing your own labeled training dataset so that the responses are more appropriate for the problem at hand. In either case, the AI services layer is focused on ease of use. The services are designed for specialized applications in industrial settings, search, business processes, and healthcare, and the layer also includes a core set of capabilities in the areas of speech, chatbots, vision, and text and documents (a minimal example of calling one of these services follows this list).
  • ML services layer: The ML services layer is the middle layer of the AWS AI/ML stack. It provides tools for data scientists to perform all the steps of the ML life cycle, such as data cleansing, feature engineering, model training, deployment, and monitoring. It is driven by the core ML platform of AWS, Amazon SageMaker. SageMaker provides the ability to build a modular, containerized environment that interfaces seamlessly with AWS compute and storage services, and it offers its own SDK with APIs to interact with the service. It removes complexity from each step of the ML workflow by providing simple-to-use modular capabilities with a choice of deployment architectures and patterns to suit virtually any ML application. It also contains MLOps capabilities for creating reproducible ML pipelines that are easy to maintain and scale. The ML services layer is suited to data scientists who build and train their own models and maintain large-scale models in production environments (a sketch of the SageMaker SDK workflow also follows this list).
  • ML frameworks and infrastructure layer: The ML frameworks and infrastructure layer is the bottom layer of the AWS AI/ML stack. The services in this layer are for expert practitioners who want to develop using the framework of their choice. Developers and scientists can run their workloads as a managed experience in Amazon SageMaker, or in a self-managed environment on AWS Deep Learning Amazon Machine Images (AMIs) and AWS Deep Learning Containers. The AWS Deep Learning AMIs and containers come fully configured with the latest versions of the most popular deep learning frameworks and tools, including PyTorch, MXNet, and TensorFlow. As part of this layer, AWS provides a broad and deep portfolio of compute, networking, and storage infrastructure services, with a choice of processors and accelerators to meet your performance and budget needs for ML (a short framework-level example also follows this list).
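To make the AI services layer concrete, here is a minimal sketch that calls Amazon Comprehend Medical, one of the healthcare-focused AI services, through the AWS SDK for Python (boto3). The clinical note, region, and printed fields are illustrative only; credentials are assumed to come from your standard AWS configuration.

import boto3

# Create a Comprehend Medical client; the region shown here is a placeholder.
comprehend_medical = boto3.client("comprehendmedical", region_name="us-east-1")

# A made-up clinical note, used purely for illustration.
note = "Patient reports shortness of breath and was prescribed 40 mg furosemide daily."

# A single API call invokes a pre-trained model; there is no training,
# hosting, or infrastructure to manage on your side.
response = comprehend_medical.detect_entities_v2(Text=note)

for entity in response["Entities"]:
    print(entity["Category"], entity["Type"], entity["Text"], round(entity["Score"], 3))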
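For the ML services layer, the following sketch outlines the typical SageMaker Python SDK workflow of training and then deploying a model. The execution role ARN, S3 path, and train.py entry point are placeholders you would replace with your own; the instance types and scikit-learn framework version are only examples.

import sagemaker
from sagemaker.sklearn.estimator import SKLearn

session = sagemaker.Session()
role = "arn:aws:iam::123456789012:role/SageMakerExecutionRole"  # placeholder role ARN

# The estimator wraps a managed, containerized training environment;
# train.py is a hypothetical script containing your model code.
estimator = SKLearn(
    entry_point="train.py",
    framework_version="1.0-1",
    instance_type="ml.m5.xlarge",
    instance_count=1,
    role=role,
    sagemaker_session=session,
)

# fit() launches a managed training job against data in S3 (placeholder path).
estimator.fit({"train": "s3://my-example-bucket/training-data/"})

# deploy() creates a managed HTTPS endpoint for real-time inference.
predictor = estimator.deploy(initial_instance_count=1, instance_type="ml.m5.large")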
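Finally, the ML frameworks and infrastructure layer is easiest to picture as bringing your own framework. The short, self-contained PyTorch loop below trains on synthetic data; on an AWS Deep Learning AMI or inside a Deep Learning Container, the framework and GPU drivers come pre-installed, so code like this runs without additional setup. The model shape, data, and hyperparameters are purely illustrative.

import torch
from torch import nn

# Use a GPU if one is available (for example, on a GPU-backed Deep Learning AMI).
device = "cuda" if torch.cuda.is_available() else "cpu"

model = nn.Sequential(nn.Linear(10, 16), nn.ReLU(), nn.Linear(16, 1)).to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Synthetic data stands in for a real dataset.
X = torch.randn(256, 10, device=device)
y = torch.randn(256, 1, device=device)

for epoch in range(5):
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    optimizer.step()
    print(f"epoch {epoch}: loss = {loss.item():.4f}")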

Now that we have a good understanding of ML and the AWS ML stack, this is a good time to re-read any sections that are not entirely clear. This chapter introduces the concepts of ML at a high level; if you want to dive deeper into any of the topics touched upon here, there are several trusted online resources you can refer to. Let us now summarize the lessons from this chapter and see what's ahead.
