Python Deep Learning - Third Edition

You're reading from Python Deep Learning - Third Edition

Product type: Book
Published: Nov 2023
Publisher: Packt
ISBN-13: 9781837638505
Pages: 362
Edition: 3rd Edition
Author: Ivan Vasilev
Table of Contents (17 chapters)

Preface
1. Part 1: Introduction to Neural Networks
2. Chapter 1: Machine Learning – an Introduction
3. Chapter 2: Neural Networks
4. Chapter 3: Deep Learning Fundamentals
5. Part 2: Deep Neural Networks for Computer Vision
6. Chapter 4: Computer Vision with Convolutional Networks
7. Chapter 5: Advanced Computer Vision Applications
8. Part 3: Natural Language Processing and Transformers
9. Chapter 6: Natural Language Processing and Recurrent Neural Networks
10. Chapter 7: The Attention Mechanism and Transformers
11. Chapter 8: Exploring Large Language Models in Depth
12. Chapter 9: Advanced Applications of Large Language Models
13. Part 4: Developing and Deploying Deep Neural Networks
14. Chapter 10: Machine Learning Operations (MLOps)
15. Index
16. Other Books You May Enjoy

Introducing LLMs

In this section, we’ll take a more systematic approach and dive deeper into transformer-based architectures. As we mentioned in the introduction, the transformer block has changed remarkably little since its introduction in 2017. Instead, the main advances have come from larger models and larger training sets. For example, the original GPT model (GPT-1) has 117M parameters, while GPT-3 (Language Models are Few-Shot Learners, https://arxiv.org/abs/2005.14165) has 175B, an increase of more than a thousandfold (a short sketch after the following list shows how to verify such parameter counts). We can distinguish two informal transformer model categories based on size:

  • Pre-trained language models (PLMs): Transformers with fewer parameters, such as Bidirectional Encoder Representations from Transformers (BERT) and generative pre-trained transformers (GPT), fall into this category. Starting with BERT, these transformers introduced the two-step pre-training/fine-tuning (FT) paradigm. The combination of the attention mechanism and unsupervised pre-training (masked...
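
To illustrate the masked pre-training objective mentioned in the preceding bullet, here is a minimal sketch, assuming the Hugging Face transformers library and PyTorch are installed (this code is our illustration, not part of the original text). A fill-mask pipeline loads a pre-trained BERT and predicts the token hidden behind the [MASK] placeholder, which is the same task BERT learns during pre-training:

# Minimal sketch of BERT-style masked language modeling, assuming the
# Hugging Face transformers package (plus PyTorch) is installed.
from transformers import pipeline

# Load a pre-trained BERT with a masked-language-modeling head
fill_mask = pipeline("fill-mask", model="bert-base-uncased")

# BERT ranks the most likely tokens for the [MASK] position
for prediction in fill_mask("The transformer block has changed [MASK] little since 2017."):
    print(f"{prediction['token_str']}: {prediction['score']:.3f}")

During pre-training, BERT masks a fraction of the input tokens and learns to reconstruct them from the surrounding context; the fill-mask pipeline simply exposes that same objective at inference time.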
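To make the parameter counts quoted before the list concrete, the following sketch (again assuming the transformers package; "openai-gpt" and "gpt2" are the library's published checkpoint names, not identifiers from the book) loads two small GPT-family models and counts their parameters:

# Minimal sketch that counts the parameters of publicly available
# GPT-family checkpoints; assumes transformers and PyTorch are installed.
from transformers import AutoModel

# "openai-gpt" is the published GPT-1 checkpoint; "gpt2" is GPT-2 small
for name in ("openai-gpt", "gpt2"):
    model = AutoModel.from_pretrained(name)
    n_params = sum(p.numel() for p in model.parameters())
    print(f"{name}: {n_params / 1e6:.0f}M parameters")

GPT-3-sized models are far too large to inspect this way on a single machine, which is precisely the point of the thousandfold comparison above.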