Advanced Deep Learning with TensorFlow 2 and Keras
Apply DL, GANs, VAEs, deep RL, unsupervised learning, object detection and segmentation, and more

Product type: Paperback
Published: Feb 2020
Publisher: Packt
ISBN-13: 9781838821654
Length: 512 pages
Edition: 2nd Edition
Author: Rowel Atienza
Table of Contents (16 chapters)

Preface
1. Introducing Advanced Deep Learning with Keras
2. Deep Neural Networks
3. Autoencoders
4. Generative Adversarial Networks (GANs)
5. Improved GANs
6. Disentangled Representation GANs
7. Cross-Domain GANs
8. Variational Autoencoders (VAEs)
9. Deep Reinforcement Learning
10. Policy Gradient Methods
11. Object Detection
12. Semantic Segmentation
13. Unsupervised Learning Using Mutual Information
14. Other Books You May Enjoy
15. Index

4. Actor-Critic method

In the REINFORCE with baseline method, the value serves only as a baseline; it is not used to train the value function. In this section, we introduce a variation of REINFORCE with baseline called the Actor-Critic method. The policy and value networks play the roles of the actor and critic networks: the policy network is the actor that decides which action to take given the state, while the value network evaluates the decision made by the actor, the policy network.
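To make the actor and critic concrete, here is a minimal tf.keras sketch of the two networks. The state dimension, action count, and layer sizes are illustrative assumptions (e.g., a CartPole-like task), not the book's listing.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

# Illustrative dimensions (assumptions, e.g., CartPole-v1).
state_dim = 4    # size of the state vector
n_actions = 2    # number of discrete actions

def build_actor():
    """Actor: maps a state to a probability distribution over actions."""
    state = layers.Input(shape=(state_dim,))
    x = layers.Dense(64, activation='relu')(state)
    probs = layers.Dense(n_actions, activation='softmax')(x)
    return Model(state, probs, name='actor')

def build_critic():
    """Critic: maps a state to a scalar estimate of the state value V(s)."""
    state = layers.Input(shape=(state_dim,))
    x = layers.Dense(64, activation='relu')(state)
    value = layers.Dense(1)(x)
    return Model(state, value, name='critic')

actor = build_actor()
critic = build_critic()
```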

The value network acts as a critic that quantifies how good or bad the chosen action undertaken by the actor is. The value network evaluates the state value, $V(s)$, by comparing it with the sum of the reward received, $r$, and the discounted value of the observed next state, $\gamma V(s')$. The difference, $\delta$, is expressed as:

$$\delta = r + \gamma V(s') - V(s)$$ (Equation 10.4.1)

where we dropped the subscripts of $r$ and $s$ for simplicity. Equation 10.4.1 is similar to the temporal differencing in Q-learning discussed in Chapter 9, Deep Reinforcement...
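As a hedged illustration of how Equation 10.4.1 drives learning, the sketch below performs one Actor-Critic update from a single transition using the networks defined earlier. The discount factor, optimizers, and terminal-state handling are assumptions for demonstration, not the book's exact listing.

```python
import tensorflow as tf

gamma = 0.99  # discount factor (assumed value)
actor_opt = tf.keras.optimizers.Adam(1e-3)
critic_opt = tf.keras.optimizers.Adam(1e-3)

def train_step(state, action, reward, next_state, done):
    """One Actor-Critic update from a single transition (s, a, r, s')."""
    state = tf.convert_to_tensor([state], dtype=tf.float32)
    next_state = tf.convert_to_tensor([next_state], dtype=tf.float32)

    with tf.GradientTape(persistent=True) as tape:
        v = critic(state)[0, 0]  # V(s)
        # Bootstrap target; V(s') is held fixed and zeroed at terminal states.
        v_next = tf.stop_gradient(critic(next_state))[0, 0]
        target = reward + gamma * v_next * (1.0 - float(done))
        delta = target - v  # Equation 10.4.1: delta = r + gamma*V(s') - V(s)

        # Critic learns by shrinking the squared TD error.
        critic_loss = tf.square(delta)
        # Actor treats delta as a fixed advantage-style weight on the
        # log-probability of the chosen action.
        log_prob = tf.math.log(actor(state)[0, action] + 1e-8)
        actor_loss = -tf.stop_gradient(delta) * log_prob

    critic_opt.apply_gradients(
        zip(tape.gradient(critic_loss, critic.trainable_variables),
            critic.trainable_variables))
    actor_opt.apply_gradients(
        zip(tape.gradient(actor_loss, actor.trainable_variables),
            actor.trainable_variables))
    del tape
    return float(delta)
```

A positive $\delta$ increases the probability of the chosen action and a negative $\delta$ decreases it, which is precisely how the critic's evaluation steers the actor.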
