Enhancing Deep Learning with Bayesian Inference

You're reading from Enhancing Deep Learning with Bayesian Inference: Create more powerful, robust deep learning systems with Bayesian deep learning in Python

Product type: Paperback
Published: Jun 2023
Publisher: Packt
ISBN-13: 9781803246888
Length: 386 pages
Edition: 1st Edition
Languages
Authors (3):

  • Jochem Gietema
  • Marian Schneider
  • Matt Benatan
Table of Contents (11)

Preface
1. Chapter 1: Bayesian Inference in the Age of Deep Learning
2. Chapter 2: Fundamentals of Bayesian Inference
3. Chapter 3: Fundamentals of Deep Learning
4. Chapter 4: Introducing Bayesian Deep Learning
5. Chapter 5: Principled Approaches for Bayesian Deep Learning
6. Chapter 6: Using the Standard Toolbox for Bayesian Deep Learning
7. Chapter 7: Practical Considerations for Bayesian Deep Learning
8. Chapter 8: Applying Bayesian Deep Learning
9. Chapter 9: Next Steps in Bayesian Deep Learning
10. Why subscribe?

5.2 Explaining notation

While we’ve introduced much of the notation used throughout this book in the previous chapters, the following chapters introduce additional notation associated with Bayesian deep learning (BDL). For reference, we provide an overview of that notation here:

  • μ: The mean. To make it easy to cross-reference our chapter with the original Probabilistic Backpropagation paper, this is represented as m when discussing PBP.

  • σ: The standard deviation.

  • σ²: The variance (the square of the standard deviation). To make it easy to cross-reference our chapter with the paper, this is represented as v when discussing PBP.

  • x: A single vector input to our model. If considering multiple inputs, we’ll use X to represent a matrix comprising multiple vector inputs.

  • x̂: An approximation of our input x.

  • y: A single scalar target. When considering multiple targets, we’ll use y to represent a vector of multiple scalar targets.

  • ŷ:...
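As a quick illustration of the first three symbols, the following sketch (our own example, not from the book) computes μ, σ, and σ² for a small vector input x using NumPy; note that NumPy's defaults give the population (not sample) statistics:

```python
import numpy as np

# A hypothetical single vector input x, as in the notation above.
x = np.array([1.0, 2.0, 3.0, 4.0])

mu = x.mean()        # μ: the mean of the elements of x
sigma = x.std()      # σ: the standard deviation
variance = x.var()   # σ²: the variance, i.e. σ squared

print(mu, variance)  # → 2.5 1.25
```

The identity σ² = σ × σ ties the last two quantities together, which is worth keeping in mind when cross-referencing the PBP paper's m and v.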
