Data Science with .NET and Polyglot Notebooks

You're reading from Data Science with .NET and Polyglot Notebooks: Programmer's guide to data science using ML.NET, OpenAI, and Semantic Kernel

Product type: Paperback
Published: Aug 2024
Publisher: Packt
ISBN-13: 9781835882962
Length: 404 pages
Edition: 1st
Author: Matt Eland
Table of Contents (22 chapters)

Preface
Part 1: Data Analysis in Polyglot Notebooks
Chapter 1: Data Science, Notebooks, and Kernels
Chapter 2: Exploring Polyglot Notebooks
Chapter 3: Getting Data and Code into Your Notebooks
Chapter 4: Working with Tabular Data and DataFrames
Chapter 5: Visualizing Data
Chapter 6: Variable Correlations
Part 2: Machine Learning with Polyglot Notebooks and ML.NET
Chapter 7: Classification Experiments with ML.NET AutoML
Chapter 8: Regression Experiments with ML.NET AutoML
Chapter 9: Beyond AutoML: Pipelines, Trainers, and Transforms
Chapter 10: Deploying Machine Learning Models
Part 3: Exploring Generative AI with Polyglot Notebooks
Chapter 11: Generative AI in Polyglot Notebooks
Chapter 12: AI Orchestration with Semantic Kernel
Part 4: Polyglot Notebooks in the Enterprise
Chapter 13: Enriching Documentation with Mermaid Diagrams
Chapter 14: Extending Polyglot Notebooks
Chapter 15: Adopting and Deploying Polyglot Notebooks
Index
Other Books You May Enjoy

Understanding RAG and AI orchestration

Training an LLM involves taking a massive amount of training data and building a mapping of how the different tokens in the source data are associated with one another.
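To make "mapping token associations" concrete, here is a toy sketch in C# (the book's primary language). This is nothing like real LLM training, which learns billions of weights over terabytes of text; it only illustrates the underlying intuition of counting which tokens tend to follow which. The corpus string is made up for illustration.

using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical toy corpus; real LLMs train on terabytes of text.
var corpus = "the cat sat on the mat and the cat ran";
var tokens = corpus.Split(' ');

// Map each token to counts of the tokens that immediately follow it.
var associations = new Dictionary<string, Dictionary<string, int>>();
for (var i = 0; i < tokens.Length - 1; i++)
{
    if (!associations.TryGetValue(tokens[i], out var followers))
    {
        followers = new Dictionary<string, int>();
        associations[tokens[i]] = followers;
    }
    followers[tokens[i + 1]] = followers.GetValueOrDefault(tokens[i + 1]) + 1;
}

// Print the strongest association for each token. In this tiny corpus,
// "the" is most strongly associated with "cat".
foreach (var (token, followers) in associations)
{
    var best = followers.OrderByDescending(kv => kv.Value).First();
    Console.WriteLine($"{token} -> {best.Key} (seen {best.Value}x)");
}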

Because training relies on self-attention, the resulting models can capture the relationships between different words in a sentence, and even between different sentences in a paragraph.
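For reference, the standard formulation of self-attention (not shown in this excerpt, but introduced in Vaswani et al.'s 2017 transformer paper, on which modern LLMs build) is scaled dot-product attention:

\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{QK^{\top}}{\sqrt{d_k}}\right)V

Here Q, K, and V are the query, key, and value matrices derived from the token embeddings, and d_k is the key dimension. Each token's output is a weighted combination of every token's value, which is what lets the model relate words across an entire sequence rather than only adjacent ones.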

The model training process requires vast amounts of both data and computing resources – typically many graphics processing units (GPUs) – and a significant amount of time. This process is illustrated in Figure 12.1:

Figure 12.1 – Training an LLM on source data

As we’ve seen, these trained LLMs are very powerful, but they’re limited to the data used in the training process.

This means that a model trained on news would not include any news stories published after the model began training.
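This training cutoff is what retrieval-augmented generation (RAG) works around: instead of relying only on what the model memorized, you retrieve relevant text at question time and place it in the prompt. Below is a minimal C# sketch of that idea. All names and documents here are hypothetical placeholders, not the book's code or any real library's API, and the keyword-matching retrieval is a deliberate simplification.

using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical documents newer than the model's training cutoff.
var documents = new List<string>
{
    "2025-03-01: Contoso shipped version 4.0 of its analytics suite.",
    "2025-04-12: Fabrikam opened a new data center in Oslo."
};

// Naive keyword retrieval: rank documents by how many words of the
// question they contain. Production systems typically rank by
// embedding similarity instead.
string Retrieve(string question) =>
    documents
        .OrderByDescending(doc => question
            .Split(' ')
            .Count(word => doc.Contains(word, StringComparison.OrdinalIgnoreCase)))
        .First();

// Augment the prompt with the retrieved context before calling the model.
string BuildPrompt(string question) =>
    $"Answer using only the context below.\nContext: {Retrieve(question)}\nQuestion: {question}";

// The augmented prompt would then be sent to an LLM – for example through
// Semantic Kernel, which this chapter's title points toward.
Console.WriteLine(BuildPrompt("What did Contoso ship?"));

The key design point is that the model itself is unchanged; only its prompt is enriched, so fresh or private information can be used without retraining.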
