TensorFlow Reinforcement Learning Quick Start Guide: Get up and running with training and deploying intelligent, self-learning agents using Python

  • eBook: ₹799.99 (was ₹1608.99)
  • Paperback: ₹2010.99
  • Subscription: Free Trial (renews at ₹800 p/m)

What do you get with Print?

  • Instant access to your digital eBook copy whilst your Print order is shipped
  • Paperback book shipped to your preferred address
  • Download this book in EPUB and PDF formats
  • Access this title in our online reader with advanced features
  • DRM FREE - Read whenever, wherever, and however you want
  • AI Assistant (beta) to help accelerate your learning

TensorFlow Reinforcement Learning Quick Start Guide

Temporal Difference, SARSA, and Q-Learning

In the previous chapter, we looked at the basics of RL. In this chapter, we will cover temporal difference (TD) learning, SARSA, and Q-learning, which were very widely used algorithms in RL before deep RL became more common. Understanding these older-generation algorithms is essential if you want to master the field, and will also lay the foundation for delving into deep RL. We will therefore spend this chapter looking at examples that use these older-generation algorithms. In addition, we will code some of these algorithms in Python. We will not be using TensorFlow in this chapter, as the problems do not involve any deep neural networks. However, this chapter will lay the groundwork for the more advanced topics that we will cover in subsequent chapters, and will also be our first coding experience of an RL algorithm...

Technical requirements

Knowledge of the following will help you to better understand the concepts presented in this chapter:

  • Python (version 2 or 3)
  • NumPy
  • TensorFlow (version 1.4 or higher)
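If you want to verify these requirements in your own environment, a quick, optional check (illustrative only, not from the book) is to print the installed versions:

# Optional sanity check for the requirements listed above.
import sys

import numpy as np

print("Python:", sys.version.split()[0])
print("NumPy:", np.__version__)

try:
    import tensorflow as tf  # used from the next chapter onwards, not in this one
    print("TensorFlow:", tf.__version__)
except ImportError:
    print("TensorFlow not installed (not needed for this chapter)")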

Understanding TD learning

We will first learn about TD learning. This is a very fundamental concept in RL. In TD learning, the agent learns from experience. Several trial episodes of the environment are undertaken, and the rewards accrued are used to update the value functions. Specifically, the agent keeps updating the state-action value functions as it experiences new states and actions. The Bellman equation is used to update this state-action value function, and the goal is to minimize the TD error. This essentially means the agent is reducing its uncertainty about which action is the optimal action in a given state; it gains confidence in the optimal action in a given state by lowering the TD error.
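The chapter's own code is not included in this preview, but as a minimal illustrative sketch (the names, step size, and toy five-state chain are assumptions, not the book's code), the tabular TD(0) update below moves a state value towards the one-step TD target; SARSA and Q-learning, covered later, apply the same idea to state-action values:

import numpy as np

def td0_update(V, s, r, s_next, alpha=0.1, gamma=0.99):
    """One tabular TD(0) update: move V(s) towards the TD target r + gamma * V(s')."""
    td_error = r + gamma * V[s_next] - V[s]  # the TD error referred to above
    V[s] += alpha * td_error                 # shrink the error with step size alpha
    return td_error

# Illustrative usage on a tiny five-state chain
V = np.zeros(5)
delta = td0_update(V, s=2, r=1.0, s_next=3)  # V[2] moves from 0.0 to 0.1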

Relation between the value functions and state

...

Understanding SARSA and Q-Learning

In this section, we will learn about SARSA and Q-learning and how they can be coded in Python. Before we go further, let's find out what SARSA and Q-learning are. SARSA is an algorithm that updates the state-action Q values; these concepts are derived from the computer science field of dynamic programming. Q-learning, on the other hand, is an off-policy algorithm that was first proposed by Christopher Watkins in 1989, and is a widely used RL algorithm.
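To make the off-policy idea concrete, here is a minimal tabular Q-learning update as a hedged sketch (not the book's code; the table size, reward, and hyperparameters are illustrative): the target uses the greedy maximum over next-state actions, regardless of which action the behaviour policy actually takes next.

import numpy as np

def q_learning_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.99):
    """One tabular Q-learning update (off-policy: the target uses the greedy max over a')."""
    td_target = r + gamma * np.max(Q[s_next])  # best next action, not necessarily the one taken
    Q[s, a] += alpha * (td_target - Q[s, a])

# Illustrative usage with a hypothetical 10-state, 4-action table
Q = np.zeros((10, 4))
q_learning_update(Q, s=0, a=2, r=-1.0, s_next=1)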

Learning SARSA

SARSA is another on-policy algorithm that was very popular, particularly in the 1990s. It is an extension of TD learning, which we saw previously. SARSA keeps updating the state-action value...
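For contrast with the Q-learning sketch above, a minimal tabular SARSA update could look like the following (again an illustrative sketch reusing the same Q-table layout, not the book's code): the target uses the next action actually selected by the behaviour policy, giving the State, Action, Reward, State, Action tuple that names the algorithm and making it on-policy.

def sarsa_update(Q, s, a, r, s_next, a_next, alpha=0.1, gamma=0.99):
    """One tabular SARSA update (on-policy: the target uses the action actually taken next)."""
    td_target = r + gamma * Q[s_next, a_next]  # a_next comes from the current (e.g. epsilon-greedy) policy
    Q[s, a] += alpha * (td_target - Q[s, a])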

Cliff walking and grid world problems

Let's consider the cliff walking and grid world problems. First, we will introduce these problems, and then we will proceed to the coding part. For both problems, we consider a rectangular grid with nrows (number of rows) and ncols (number of columns). We start from one cell to the south of the bottom-left cell, and the goal is to reach the destination, which is one cell to the south of the bottom-right cell.

Note that the start and destination cells are not part of the nrows x ncols grid of cells. For the cliff walking problem, the cells to the south of the bottom row of cells, except for the start and destination cells, form a cliff where, if the agent enters, the episode ends with a catastrophic fall into the cliff. Likewise, if the agent tries to leave the left, top, or right boundaries of the grid of cells, it is placed back in the...
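The book's environment code is truncated in this preview. Purely as a hypothetical sketch of the layout described above (start one cell south of the bottom-left cell, destination one cell south of the bottom-right cell, cliff in between), such an environment could be arranged roughly as follows; the reward values and the handling of the extra southern row are assumptions:

class CliffGridEnv:
    """Hypothetical cliff-walking grid following the description above (not the book's code)."""

    ACTIONS = {0: (-1, 0), 1: (1, 0), 2: (0, -1), 3: (0, 1)}  # up, down, left, right

    def __init__(self, nrows=4, ncols=12):
        self.nrows, self.ncols = nrows, ncols
        self.start = (nrows, 0)          # one cell to the south of the bottom-left cell
        self.goal = (nrows, ncols - 1)   # one cell to the south of the bottom-right cell
        self.pos = self.start

    def reset(self):
        self.pos = self.start
        return self.pos

    def step(self, action):
        dr, dc = self.ACTIONS[action]
        r, c = self.pos[0] + dr, self.pos[1] + dc
        # Leaving the left, top, or right boundary (or moving further south, an assumption)
        # puts the agent back where it was.
        if r < 0 or r > self.nrows or c < 0 or c >= self.ncols:
            r, c = self.pos
        self.pos = (r, c)
        if (r, c) == self.goal:
            return self.pos, 0.0, True     # reached the destination (terminal reward assumed)
        if r == self.nrows and (r, c) != self.start:
            return self.pos, -100.0, True  # fell into the cliff (penalty assumed)
        return self.pos, -1.0, False       # per-step cost assumed

A SARSA or Q-learning agent would then interact with this environment through reset() and step() in the usual episode loop, selecting actions with an epsilon-greedy policy over the Q table.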

Summary

In this chapter, we looked at the concept of TD. We also learned about our first two RL algorithms: Q-learning and SARSA. We saw how you can code these two algorithms in Python and use them to solve the cliff walking and grid world problems. These two algorithms give us a good understanding of the basics of RL and how to transition from theory to code. These two algorithms were very popular in the 1990s and early 2000s, before deep RL gained prominence. Despite that, Q-learning and SARSA still find use in the RL community today.

In the next chapter, we will look at the use of deep neural networks in RL, which gives rise to deep RL. We will see a variant of Q-learning called Deep Q-Networks (DQNs), which uses a neural network instead of the tabular state-action value function we saw in this chapter. Note that only problems with a small number of states and actions are...

Further reading

  • Reinforcement Learning: An Introduction by Richard Sutton and Andrew Barto, 2018

Key benefits

  • Explore efficient Reinforcement Learning algorithms and code them using TensorFlow and Python
  • Train Reinforcement Learning agents for problems ranging from computer games to autonomous driving
  • Formulate and devise selective algorithms and techniques for your applications in no time

Description

Advances in reinforcement learning algorithms have made it possible to use them for optimal control in several different industrial applications. With this book, you will apply Reinforcement Learning to a range of problems, from computer games to autonomous driving.

The book starts by introducing you to essential Reinforcement Learning concepts such as agents, environments, rewards, and advantage functions. You will also master the distinctions between on-policy and off-policy algorithms, as well as model-free and model-based algorithms. You will then learn about several Reinforcement Learning algorithms, such as SARSA, Deep Q-Networks (DQN), Deep Deterministic Policy Gradients (DDPG), Asynchronous Advantage Actor-Critic (A3C), Trust Region Policy Optimization (TRPO), and Proximal Policy Optimization (PPO). The book will show you how to code these algorithms in TensorFlow and Python and apply them to solve computer games from OpenAI Gym. Finally, you will learn how to train a car to drive autonomously in the TORCS racing car simulator.

By the end of the book, you will be able to design, build, train, and evaluate feed-forward neural networks and convolutional neural networks. You will also have mastered coding state-of-the-art algorithms and training agents for various control problems.

Who is this book for?

Data scientists and AI developers who wish to quickly get started with training effective reinforcement learning models in TensorFlow will find this book very useful. Prior knowledge of machine learning and deep learning concepts (as well as exposure to Python programming) will be useful.

What you will learn

  • Understand the theory and concepts behind modern Reinforcement Learning algorithms
  • Code state-of-the-art Reinforcement Learning algorithms with discrete or continuous actions
  • Develop Reinforcement Learning algorithms and apply them to training agents to play computer games
  • Explore DQN, DDQN, and Dueling architectures to play Atari's Breakout using TensorFlow
  • Use A3C to play CartPole and LunarLander
  • Train an agent to drive a car autonomously in a simulator
Estimated delivery fee (Deliver to India)

Premium delivery, 5 - 8 business days: ₹630.95 (includes tracking information)

Product Details

Publication date: Mar 30, 2019
Length: 184 pages
Edition: 1st
Language: English
ISBN-13: 9781789533583
Vendor: Google



Frequently bought together


  • TensorFlow Reinforcement Learning Quick Start Guide: ₹2010.99
  • TensorFlow 2.0 Quick Start Guide: ₹2457.99

Total: ₹4,468.98

Table of Contents

10 Chapters

  1. Up and Running with Reinforcement Learning
  2. Temporal Difference, SARSA, and Q-Learning
  3. Deep Q-Network
  4. Double DQN, Dueling Architectures, and Rainbow
  5. Deep Deterministic Policy Gradient
  6. Asynchronous Methods - A3C and A2C
  7. Trust Region Policy Optimization and Proximal Policy Optimization
  8. Deep RL Applied to Autonomous Driving
  9. Assessment
  10. Other Books You May Enjoy

Customer reviews

Rating distribution: 5.0 out of 5 (2 ratings)

  • 5 star: 100%
  • 4 star: 0%
  • 3 star: 0%
  • 2 star: 0%
  • 1 star: 0%

Colbert Philippe, Nov 20, 2019 (5 stars, Amazon Verified review)
This is a fantastic book for those starting in the field.

Praveen Narayanan, Jun 27, 2019 (5 stars, Amazon Verified review)
This book presents a readable, instructive overview of the latest RL methods for the beginning practitioner. It walks the reader through the subject with motivating examples and well chosen code to get their hands dirty.

FAQs

What is the delivery time and cost of print books?

Shipping Details

USA:

Economy: Delivery to most addresses in the US within 10-15 business days

Premium: Trackable Delivery to most addresses in the US within 3-8 business days

UK:

Economy: Delivery to most addresses in the U.K. within 7-9 business days.
Shipments are not trackable

Premium: Trackable delivery to most addresses in the U.K. within 3-4 business days!
Add one extra business day for deliveries to Northern Ireland and Scottish Highlands and islands

EU:

Premium: Trackable delivery to most EU destinations within 4-9 business days.

Australia:

Economy: Can deliver to P. O. Boxes and private residences.
Trackable service with delivery to addresses in Australia only.
Delivery time ranges from 7-9 business days for VIC and 8-10 business days for Interstate metro.
Delivery time is up to 15 business days for remote areas of WA, NT & QLD.

Premium: Delivery to addresses in Australia only
Trackable delivery to most P. O. Boxes and private residences in Australia within 4-5 days based on the distance to a destination following dispatch.

India:

Premium: Delivery to most Indian addresses within 5-6 business days

Rest of the World:

Premium: Countries in the American continent: Trackable delivery to most countries within 4-7 business days

Asia:

Premium: Delivery to most Asian addresses within 5-9 business days

Disclaimer:
All orders received before 5 PM U.K. time will start printing from the next business day, so the estimated delivery times start from the next day as well. Orders received after 5 PM U.K. time (in our internal systems) on a business day, or at any time on the weekend, will begin printing on the second business day after the order. For example, an order placed at 11 AM today will begin printing tomorrow, whereas an order placed at 9 PM tonight will begin printing the day after tomorrow.


Unfortunately, due to several restrictions, we are unable to ship to the following countries:

  1. Afghanistan
  2. American Samoa
  3. Belarus
  4. Brunei Darussalam
  5. Central African Republic
  6. The Democratic Republic of Congo
  7. Eritrea
  8. Guinea-Bissau
  9. Iran
  10. Lebanon
  11. Libyan Arab Jamahiriya
  12. Somalia
  13. Sudan
  14. Russian Federation
  15. Syrian Arab Republic
  16. Ukraine
  17. Venezuela
What is a customs duty/charge?

A customs duty is a charge levied on goods when they cross international borders. It is a tax imposed on imported goods. These duties are charged by special authorities and bodies created by local governments and are meant to protect local industries, economies, and businesses.

Do I have to pay customs charges for the print book order?

Orders shipped to countries listed under the EU27 will not bear customs charges; these are paid by Packt as part of the order.

List of EU27 countries: www.gov.uk/eu-eea

For shipments to countries outside the EU27, a customs duty or localized taxes may be applicable and would be charged by the recipient country. These duties should be paid by the customer and are not included in the shipping charges on the order.

How do I know my customs duty charges?

The amount of duty payable varies greatly depending on the imported goods, the country of origin, and several other factors, such as the total invoice amount, dimensions like weight, and other such criteria applicable in your country.

For example:

  • If you live in Mexico and the declared value of your ordered items is over $50, then to receive your package you will have to pay an additional import tax of 19% (that is, $9.50 on a $50 order) to the courier service.
  • Whereas if you live in Turkey and the declared value of your ordered items is over €22, then to receive your package you will have to pay an additional import tax of 18% (that is, €3.96 on a €22 order) to the courier service.
How can I cancel my order?

Cancellation Policy for Published Printed Books:

You can cancel any order within 1 hour of placing it. Simply contact customercare@packt.com with your order details or payment transaction ID. If your order has already started the shipment process, we will do our best to stop it. However, if it is already on its way to you, then when you receive it, you can contact us at customercare@packt.com using the returns and refunds process.

Please understand that Packt Publishing cannot provide refunds or cancel any order except in the cases described in our Return Policy (i.e. where Packt Publishing agrees to replace your printed book because it arrives damaged or has a material defect); otherwise, Packt Publishing will not accept returns.

What is your returns and refunds policy?

Return Policy:

We want you to be happy with your purchase from Packtpub.com. We will not hassle you with returning print books to us. If the print book you receive from us is incorrect, damaged, doesn't work, or is unacceptably late, please contact our Customer Relations Team at customercare@packt.com with the order number and issue details, as explained below:

  1. If you ordered an eBook, Video, or Print Book incorrectly or accidentally, please contact our Customer Relations Team at customercare@packt.com within one hour of placing the order and we will replace or refund the item cost.
  2. Sadly, if your eBook or Video file is faulty, or a fault occurs while the eBook or Video is being made available to you (i.e. during download), you should contact our Customer Relations Team at customercare@packt.com within 14 days of purchase, and they will be able to resolve the issue for you.
  3. You will have a choice of replacement or refund for the problem items (damaged, defective, or incorrect).
  4. Once the Customer Care Team confirms that you will be refunded, you should receive the refund within 10 to 12 working days.
  5. If you are only requesting a refund of one book from a multiple-item order, then we will refund the appropriate single item.
  6. Where the items were shipped under a free shipping offer, there will be no shipping costs to refund.

On the off chance your printed book arrives damaged or with a material defect, contact our Customer Relations Team at customercare@packt.com within 14 days of receipt of the book, with appropriate evidence of the damage, and we will work with you to secure a replacement copy, if necessary. Please note that each printed book you order from us is individually made by Packt's professional book-printing partner on a print-on-demand basis.

What tax is charged?

Currently, no tax is charged on the purchase of any print book (subject to change based on laws and regulations). A localized VAT fee is charged only to our European and UK customers on the eBooks, Videos, and subscriptions that they buy. GST is charged to Indian customers for eBook and Video purchases.

What payment methods can I use?

You can pay with the following card types:

  1. Visa Debit
  2. Visa Credit
  3. MasterCard
  4. PayPal