
Bringing it all together

Having delved into the various technical components of the generative AI stack separately, let's now consolidate them into a unified perspective.

Figure 16.8: Generative AI tech stack

In summary, a generative AI platform extends an ML platform with additional capabilities such as prompt management, input/output filtering, and tools for FM evaluation and RLHF workflows. To accommodate these enhancements, the ML platform's pipeline capability will need to support new generative AI workflows. The new RAG infrastructure will form the backbone of RAG-based LLM applications and will be closely integrated with the underlying generative AI platform.
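To make this concrete, the following is a minimal Python sketch, not taken from the book's reference architecture, of how prompt management, input/output filtering, and RAG retrieval might layer around a foundation model call. All names here (PromptStore, retrieve_context, invoke_fm, BLOCKED_TERMS) are hypothetical stand-ins for the corresponding platform components.

# Illustrative sketch only: prompt management, input/output filtering, and RAG
# retrieval wrapped around a stand-in foundation model call.

BLOCKED_TERMS = {"ssn", "credit card"}  # toy denylist for input/output filtering


class PromptStore:
    """Stand-in for the platform's versioned prompt template store."""
    _templates = {
        ("qa", "v1"): (
            "Answer using only the context below.\n"
            "Context:\n{context}\n\nQuestion: {question}"
        )
    }

    def get(self, name: str, version: str = "v1") -> str:
        return self._templates[(name, version)]


def passes_filter(text: str) -> bool:
    # Input/output filtering: reject text containing denylisted terms.
    return not any(term in text.lower() for term in BLOCKED_TERMS)


def retrieve_context(question: str) -> str:
    # Stand-in for a vector-store lookup in the RAG infrastructure.
    return ("Generative AI platforms extend ML platforms with prompt "
            "management, filtering, FM evaluation, and RLHF tooling.")


def invoke_fm(prompt: str) -> str:
    # Stand-in for calling a hosted foundation model endpoint.
    return f"[model response to a {len(prompt)}-character prompt]"


def answer(question: str) -> str:
    if not passes_filter(question):
        return "Request rejected by input filter."
    template = PromptStore().get("qa", "v1")
    prompt = template.format(context=retrieve_context(question), question=question)
    response = invoke_fm(prompt)
    return response if passes_filter(response) else "Response withheld by output filter."


if __name__ == "__main__":
    print(answer("What does a generative AI platform add to an ML platform?"))

Keeping the template store, filters, and retrieval behind separate functions mirrors the separation of concerns described above: each capability can evolve independently of the FM endpoint it wraps.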

The development of generative AI applications will continue to leverage other core application architecture components, including streaming, batch processing, message queuing, and workflow tools.
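As one small illustration of these components, the toy producer/consumer loop below uses Python's standard library queue to decouple incoming generation requests from model invocation, in the spirit of the message-queuing pattern mentioned above. In a real deployment, the in-process queue and worker thread would be a managed queue service and a fleet of workers calling the platform's model endpoints.

# Illustrative sketch only: queuing generation requests for asynchronous processing.
import queue
import threading

requests: "queue.Queue" = queue.Queue()


def worker() -> None:
    while True:
        job = requests.get()
        if job is None:  # shutdown signal
            break
        # In a real system this would call the generative AI platform's endpoint.
        print(f"Processed request {job['id']}: prompt length {len(job['prompt'])}")
        requests.task_done()


t = threading.Thread(target=worker, daemon=True)
t.start()

for i in range(3):
    requests.put({"id": i, "prompt": f"Summarize document {i}"})

requests.join()       # wait until all queued requests are processed
requests.put(None)    # stop the worker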

Although many of the core components will...
