Pretrain Vision and Large Language Models in Python

End-to-end techniques for building and deploying foundation models on AWS

Product type: Paperback
Published: May 2023
Publisher: Packt
ISBN-13: 9781804618257
Length: 258 pages
Edition: 1st
Author: Emily Webber

Table of Contents

Preface
Part 1: Before Pretraining
Chapter 1: An Introduction to Pretraining Foundation Models
Chapter 2: Dataset Preparation: Part One
Chapter 3: Model Preparation
Part 2: Configure Your Environment
Chapter 4: Containers and Accelerators on the Cloud
Chapter 5: Distribution Fundamentals
Chapter 6: Dataset Preparation: Part Two, the Data Loader
Part 3: Train Your Model
Chapter 7: Finding the Right Hyperparameters
Chapter 8: Large-Scale Training on SageMaker
Chapter 9: Advanced Training Concepts
Part 4: Evaluate Your Model
Chapter 10: Fine-Tuning and Evaluating
Chapter 11: Detecting, Mitigating, and Monitoring Bias
Chapter 12: How to Deploy Your Model
Part 5: Deploy Your Model
Chapter 13: Prompt Engineering
Chapter 14: MLOps for Vision and Language
Chapter 15: Future Trends in Pretraining Foundation Models
Index
Other Books You May Enjoy

Preface

So, you want to work with foundation models? That is an excellent place to begin! Many of us in the machine learning community have followed these curious creatures for years, from their earliest appearance in the first days of the Transformer models, to their expansion into computer vision, to the near-ubiquitous presence of text generation and interactive dialogue we see in the world today.

But where do foundation models come from? How do they work? What makes them tick, and when should you pretrain and fine-tune them? How can you eke out performance gains on your datasets and applications? How many accelerators do you need? What does an end-to-end application look like, and how can you use foundation models to master this new surge of interest in generative AI?

These pages hope to provide answers to these very important questions. As you are no doubt aware, the pace of innovation in this space is truly breathtaking, with more foundation models coming online every day from both open-source and proprietary model vendors. To grapple with this reality, I’ve tried to focus on the most important conceptual fundamentals throughout the book. This means your careful study here should pay off for at least a few more years ahead.

In terms of practical applications and guidance, I’ve overwhelmingly focused on cloud computing options available through AWS, and especially Amazon SageMaker. I’ve happily spent more than five years at AWS and enjoy sharing all of my knowledge and experience with you! Please note that all thoughts and opinions shared in this book are my own and do not represent those of Amazon.

The following chapters focus on concepts, not code. This is because software changes rapidly, while fundamentals change very slowly. In the repository that accompanies the book, you’ll find links to my go-to resources for all of the key topics mentioned throughout these fifteen chapters, which you can use right away to get hands-on with everything you’re learning here. Starting July 1, 2023, you’ll also find in the repository a set of new pretraining and fine-tuning examples from yours truly to complement all of the topics.

You might find this hard to believe, but in my early twenties I wasn’t actually coding: I was exploring the life of a Buddhist monastic. I spent five years living at a meditation retreat center in Arizona, the Garchen Institute. During this time, I learned how to meditate, focus my mind, watch my emotions, and develop virtuous habits. Years later, after my master’s degree at the University of Chicago, and now at Amazon, I can see that these traits are extremely useful in today’s world as well!

I mention this so that you can take heart. Machine learning, artificial intelligence, cloud computing, economics, application development: none of these topics is straightforward. But if you apply yourself, if you really stretch your mind to consider the core foundations of the topics at hand, if you keep coming back to the challenge again and again, there’s truly nothing you can’t do. That is the beauty of humanity! And if a meditating yogi straight from the deep silence of a retreat hut can eventually learn what it takes to pretrain and fine-tune foundation models, then so can you!

With that in mind, let’s learn more about the book itself!

Note

Most of the concepts mentioned here will be accompanied by scripting examples in the repository starting July 1, 2023. However, to get you started even earlier, you can find a list of resources in the repository today with links to useful hands-on examples elsewhere for demonstration.
