Applying Math with Python

Over 70 practical recipes for solving real-world computational math problems

Product type: Paperback
Published: Dec 2022
Publisher: Packt
ISBN-13: 9781804618370
Length: 376 pages
Edition: 2nd Edition
Author: Sam Morley
Table of Contents (13 chapters)

Preface
1. Chapter 1: An Introduction to Basic Packages, Functions, and Concepts
2. Chapter 2: Mathematical Plotting with Matplotlib
3. Chapter 3: Calculus and Differential Equations
4. Chapter 4: Working with Randomness and Probability
5. Chapter 5: Working with Trees and Networks
6. Chapter 6: Working with Data and Statistics
7. Chapter 7: Using Regression and Forecasting
8. Chapter 8: Geometric Problems
9. Chapter 9: Finding Optimal Solutions
10. Chapter 10: Improving Your Productivity
11. Index
12. Other Books You May Enjoy

Using gradient descent methods in optimization

In the previous recipe, we used the Nelder-Mead simplex algorithm to minimize a nonlinear function of two variables. This is a fairly robust method that works even when very little is known about the objective function. However, in many situations we do know more about the objective function, and that extra knowledge allows us to devise faster and more efficient minimization algorithms. We can do this by making use of properties such as the gradient of the function.
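To see why this matters, here is a minimal sketch (not taken from the book's recipes) that minimizes the Rosenbrock function with SciPy twice: once with the derivative-free Nelder-Mead method, as in the previous recipe, and once with the gradient-based BFGS method supplied with the analytic gradient. The objective function, starting point, and comparison of evaluation counts are illustrative choices of my own.

```python
import numpy as np
from scipy import optimize

# Rosenbrock function of two variables and its analytic gradient.
def f(x):
    return (1 - x[0]) ** 2 + 100 * (x[1] - x[0] ** 2) ** 2

def grad_f(x):
    return np.array([
        -2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0] ** 2),
        200 * (x[1] - x[0] ** 2),
    ])

x0 = np.array([-1.0, 1.0])

# Derivative-free minimization (knows nothing about the gradient).
res_nm = optimize.minimize(f, x0, method="Nelder-Mead")

# Gradient-based minimization from the same starting point.
res_bfgs = optimize.minimize(f, x0, jac=grad_f, method="BFGS")

# BFGS typically needs far fewer function evaluations to converge.
print(res_nm.nfev, res_bfgs.nfev)
```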

The gradient of a function of more than one variable describes the rate of change of the function in each of its component directions. It is the vector of partial derivatives of the function with respect to each of its variables. From this gradient vector, we can deduce the direction in which the function increases most rapidly and, conversely, the direction in which it decreases most rapidly from any given position. This gives us the basis...
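To make this concrete, the gradient of f(x1, ..., xn) is the vector ∇f = (∂f/∂x1, ..., ∂f/∂xn), and the basic gradient descent iteration steps against it: x_{k+1} = x_k - η∇f(x_k) for some step size η > 0. The following minimal sketch is my own illustration of that update rule; the example function, step size, and stopping tolerance are all assumptions made for demonstration.

```python
import numpy as np

def gradient_descent(grad, x0, learning_rate=0.1, tol=1e-8, max_iter=10_000):
    """Minimize a function by repeatedly stepping against its gradient.

    Iterates x_{k+1} = x_k - learning_rate * grad(x_k) until the gradient
    norm falls below tol or the iteration budget is exhausted.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break  # gradient (almost) vanishes: we are at a stationary point
        x = x - learning_rate * g
    return x

# Example: f(x, y) = (x - 1)^2 + 2*(y + 2)^2 has gradient (2(x - 1), 4(y + 2)).
grad = lambda x: np.array([2 * (x[0] - 1), 4 * (x[1] + 2)])
print(gradient_descent(grad, [0.0, 0.0]))  # approaches the minimum at (1, -2)
```

A fixed step size is the simplest choice; practical implementations often select the step by a line search along the descent direction instead, which is the refinement the steepest-descent recipes in this chapter build toward.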
