50 Algorithms Every Programmer Should Know

Tackle computer science challenges with classic to modern algorithms in machine learning, software design, data systems, and cryptography

Product type: Paperback
Published: Sep 2023
Publisher: Packt
ISBN-13: 9781803247762
Length: 538 pages
Edition: 2nd Edition
Author: Imran Ahmad
Table of Contents (22 chapters)

Preface
1. Section 1: Fundamentals and Core Algorithms
2. Overview of Algorithms
3. Data Structures Used in Algorithms
4. Sorting and Searching Algorithms
5. Designing Algorithms
6. Graph Algorithms
7. Section 2: Machine Learning Algorithms
8. Unsupervised Machine Learning Algorithms
9. Traditional Supervised Learning Algorithms
10. Neural Network Algorithms
11. Algorithms for Natural Language Processing
12. Understanding Sequential Models
13. Advanced Sequential Modeling Algorithms
14. Section 3: Advanced Topics
15. Recommendation Engines
16. Algorithmic Strategies for Data Handling
17. Cryptography
18. Large-Scale Algorithms
19. Practical Considerations
20. Other Books You May Enjoy
21. Index

Rectified linear unit (ReLU)

The output of the first two activation functions presented in this chapter was binary: they take a set of input variables and convert them into binary outputs. ReLU is an activation function that takes a set of input variables and converts them into a single continuous output. ReLU is the most popular activation function in neural networks and is usually used in the hidden layers, where we do not want to convert continuous variables into category variables. The following diagram summarizes the ReLU activation function:

Figure 8.12: Rectified linear unit

Note that when x ≤ 0, y = 0. This means that any input signal that is zero or less is translated into a zero output:

y = 0 for x ≤ 0
y = x for x > 0

As soon as x becomes greater than zero, the output is x. The ReLU function is one of the most used activation functions in neural networks. It can...
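To make the piecewise definition concrete, here is a minimal sketch of ReLU in Python using NumPy (for illustration only; the names relu and inputs are not from the book):

import numpy as np

def relu(x):
    # Rectified linear unit: returns 0 for x <= 0, and x itself for x > 0
    return np.maximum(0, x)

# Zero and negative inputs are clipped to 0; positive inputs pass through
inputs = np.array([-2.0, -0.5, 0.0, 1.5, 3.0])
print(relu(inputs))  # [0.  0.  0.  1.5 3. ]

Because np.maximum broadcasts element-wise, the same function works on a scalar, a vector, or an entire layer's pre-activation matrix.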
