Hands-On Q-Learning with Python: Practical Q-learning with OpenAI Gym, Keras, and TensorFlow



Brushing Up on Reinforcement Learning Concepts

In this book, you will learn the fundamentals of Q-learning, a branch of reinforcement learning (RL), and how to apply them to challenging real-world optimization problems. You'll design software that adapts its own behavior and improves its own performance in real time.

In doing so, you will build self-learning intelligent agents that start with no knowledge of how to solve a problem and independently find optimal solutions to that problem through observation, trial and error, and memory.

RL is one of the most exciting branches of artificial intelligence (AI) and powers some of its most visible successes, from recommendation systems that learn from user behavior to game-playing machines that can beat any human being at chess or Go.

Q-learning is one of the easiest versions of RL to get started with, and...

What is RL?

An RL agent is an optimization process that learns from experience, using data from its environment that it has collected through its own observations. It starts out with no explicit knowledge of the task, learns by trial and error what happens when it makes decisions, keeps track of the decisions that succeed, and repeats those decisions when it encounters the same circumstances in the future.
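
To make that idea concrete, here is a minimal sketch of such a trial-and-error learner in plain Python. It is an illustration rather than code from this book: the agent records the average reward it has observed for each state-action pair and, most of the time, repeats the most successful decision it has seen so far.

    import random
    from collections import defaultdict

    class TrialAndErrorAgent:
        def __init__(self, actions, epsilon=0.1):
            self.actions = actions
            self.epsilon = epsilon             # how often to explore at random
            self.totals = defaultdict(float)   # summed reward per (state, action)
            self.counts = defaultdict(int)     # times each (state, action) was tried

        def choose(self, state):
            # Mostly repeat the most successful known decision for this state,
            # but occasionally try a random action to gather new experience.
            if random.random() < self.epsilon:
                return random.choice(self.actions)
            return max(self.actions, key=lambda a: self.totals[(state, a)]
                       / max(self.counts[(state, a)], 1))

        def remember(self, state, action, reward):
            # Keep track of what happened so future choices can exploit it.
            self.totals[(state, action)] += reward
            self.counts[(state, action)] += 1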

In fields other than AI, such as operations research, RL is also referred to as approximate dynamic programming. It takes much of its basic operating structure from behavioral psychology, and many of its mathematical constructs, such as utility functions, are taken from fields such as economics and game theory.

Let's get familiar with some key concepts in RL:

  • Agent: This is the decision-making entity.
  • Environment: This is the world in which the agent operates, such as a game to win or task to accomplish.
  • State: This...

States, actions, and rewards

What does it mean to be in a state, to take an action, or to receive a reward? These are the most important concepts for us to understand intuitively, so let's dig deeper into them. The following diagram depicts the agent-environment interaction in a Markov decision process (MDP):

The agent interacts with the environment through actions, and it receives rewards and state information from the environment. In other words, the states and rewards are feedback from the environment, and the actions are inputs to the environment from the agent.
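
This feedback loop maps directly onto the OpenAI Gym API used throughout this book. As a minimal sketch (CartPole is used here only because it needs no extra setup, and the classic pre-0.26 Gym step signature is assumed):

    import gym

    env = gym.make('CartPole-v1')        # the environment
    state = env.reset()                  # the initial state observation
    done = False
    while not done:
        action = env.action_space.sample()             # the agent's input to the environment
        state, reward, done, info = env.step(action)   # feedback: next state and reward
    env.close()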

Going back to our simple driving simulator example, our agent might be moving or stopped at a red light, turning left or right, or heading straight. There might be other cars in the intersection, or there might not be. Our distance from the destination will be X units.
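
One hypothetical way to encode such a state in Python (the field names and values here are illustrative, not taken from the book's Smartcab project):

    from collections import namedtuple

    # A discrete state for the driving example: every combination of these
    # fields is one entry in the agent's state space.
    DrivingState = namedtuple('DrivingState',
                              ['light', 'cars_in_intersection', 'distance'])

    state = DrivingState(light='red', cars_in_intersection=True, distance=5)
    actions = ['wait', 'go_straight', 'turn_left', 'turn_right']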

...

Key concepts in RL

Here, we'll go over some of the most important concepts that we'll need to bear in mind throughout our study of RL. We'll focus heavily on topics that are specific to Q-learning, but we'll also explore topics relating to other branches of RL, such as the related algorithm SARSA and policy-based RL algorithms.

Value-based versus policy-based iteration

We'll be using value-based iteration for the projects in this book. The description of the Bellman equation given previously offers a very high-level understanding of how value-based iteration works. The main difference is that in value-based iteration, the agent learns the expected reward value of each state-action pair, and in policy...
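
In update-rule form, a value-based learner keeps an estimate Q(s, a) of the expected reward of each state-action pair and nudges it toward a Bellman-style target after every step. A short sketch with hypothetical numbers:

    def q_update(q_sa, reward, max_q_next, alpha=0.1, gamma=0.9):
        # Move the old estimate toward the observed reward plus the
        # discounted value of the best next action (the Bellman target).
        return q_sa + alpha * (reward + gamma * max_q_next - q_sa)

    # Example: old estimate 0.5, reward 1.0, best next-state value 0.8.
    # Target = 1.0 + 0.9 * 0.8 = 1.72, so the new estimate is
    # 0.5 + 0.1 * (1.72 - 0.5) ≈ 0.622.
    print(q_update(0.5, 1.0, 0.8))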

SARSA versus Q-learning – on-policy or off?

Similar to Q-learning, SARSA is a model-free RL method that does not explicitly learn the agent's policy function.

The primary difference between SARSA and Q-learning is that SARSA is an on-policy method while Q-learning is an off-policy method. The effective difference between the two algorithms happens in the step where the Q-table is updated. Let's discuss what that means with some examples:
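
In code, the contrast is a one-line difference in the update target. A minimal sketch, assuming Q is a mapping from states to lists of per-action values:

    def q_learning_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.9):
        # Off-policy: the target uses the best action in the next state,
        # regardless of which action the agent will actually take next.
        target = r + gamma * max(Q[s_next])
        Q[s][a] += alpha * (target - Q[s][a])

    def sarsa_update(Q, s, a, r, s_next, a_next, alpha=0.1, gamma=0.9):
        # On-policy: the target uses the action the agent actually chose
        # in the next state, so exploration shapes what is learned.
        target = r + gamma * Q[s_next][a_next]
        Q[s][a] += alpha * (target - Q[s][a])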

Monte Carlo tree search (MCTS) is a type of model-based RL. We won't be discussing it in detail here, but it's useful to explore further as a contrast to model-free RL algorithms. Briefly, in model-based RL, we attempt to explicitly model the environment's dynamics (its transitions and rewards) instead of relying purely on sampling and observation, so that we don't have to rely as much on trial and error in the learning process.

...

Summary

RL is one of the most exciting and fastest-growing branches of machine learning, with the greatest potential to create powerful optimization solutions to wide-ranging computing problems. As we have seen, Q-learning is one of the most accessible branches of RL, and it will provide beginning RL practitioners and experienced programmers with a strong foundation for developing solutions to both straightforward and complex optimization problems.

In the next chapter, we'll learn about Q-learning in detail, as well as about the learning agent that we'll be training to solve our Q-learning task. We'll discuss how Q-learning solves MDPs using a state-action model and how to apply that to our programming task.

Questions

  1. What is the difference between a reward and a value?
  2. What is a hyperparameter? Give an example of a hyperparameter other than the ones discussed in this chapter.
  3. Why might a Q-learning agent not always choose the highest Q-valued action for its current state?
  4. Explain one benefit of a decaying gamma.
  5. Describe in one or two sentences the difference between the decision-making strategies of SARSA and Q-learning.
  6. What kind of policy does Q-learning implicitly assume the agent is following?
  7. Under what circumstances will SARSA and Q-learning produce the same results?

Key benefits

  • Understand Q-learning algorithms to train neural networks using Markov decision processes (MDPs)
  • Study practical deep reinforcement learning using Q-Networks
  • Explore state-based unsupervised learning for machine learning models

Description

Q-learning is a machine learning algorithm used to solve optimization problems in artificial intelligence (AI), and it belongs to reinforcement learning, one of the most popular fields of study among AI researchers. This book starts off by introducing you to reinforcement learning and Q-learning, in addition to helping you become familiar with OpenAI Gym as well as libraries such as Keras and TensorFlow. A few chapters into the book, you will gain insights into model-free Q-learning and use deep Q-networks and double deep Q-networks to solve complex problems. This book will guide you in exploring use cases such as self-driving vehicles and OpenAI Gym’s CartPole problem. You will also learn how to tune and optimize Q-networks and their hyperparameters. As you progress, you will understand the reinforcement learning approach to solving real-world problems. You will also explore how to use Q-learning and related algorithms in scientific research. Toward the end, you’ll gain insight into what’s in store for reinforcement learning. By the end of this book, you will be equipped with the skills you need to solve reinforcement learning problems using Q-learning algorithms with OpenAI Gym, Keras, and TensorFlow.

Who is this book for?

If you are a machine learning developer, engineer, or professional who wants to explore the deep learning approach for a complex environment, then this is the book for you. Proficiency in Python programming and a basic understanding of decision-making in reinforcement learning are assumed.

What you will learn

  • Explore the fundamentals of reinforcement learning and the state-action-reward process
  • Understand Markov Decision Processes
  • Get well-versed with libraries such as Keras and TensorFlow
  • Create and deploy model-free learning and deep Q-learning agents with TensorFlow, Keras, and OpenAI Gym
  • Choose and optimize a Q-network's learning parameters and fine-tune its performance
  • Discover real-world applications and use cases of Q-learning

Product Details

Publication date: Apr 19, 2019
Length: 212 pages
Edition: 1st
Language: English
ISBN-13: 9781789345803





Table of Contents

13 Chapters

Section 1: Q-Learning: A Roadmap
Brushing Up on Reinforcement Learning Concepts
Getting Started with the Q-Learning Algorithm
Setting Up Your First Environment with OpenAI Gym
Teaching a Smartcab to Drive Using Q-Learning
Section 2: Building and Optimizing Q-Learning Agents
Building Q-Networks with TensorFlow
Digging Deeper into Deep Q-Networks with Keras and TensorFlow
Section 3: Advanced Q-Learning Challenges with Keras, TensorFlow, and OpenAI Gym
Decoupling Exploration and Exploitation in Multi-Armed Bandits
Further Q-Learning Research and Future Projects
Assessments
Other Books You May Enjoy

Customer reviews

Rating distribution
2.3 out of 5 stars
(3 Ratings)
5 star 33.3%
4 star 0%
3 star 0%
2 star 0%
1 star 66.7%
SSV, Jul 18, 2019 (5 stars)
I was sent a copy of this book by the publisher to read and review. If you are an intermediate-level Python user who is passionate about artificial intelligence and neural networks and looking to improve your programming skills with Python, then this book is a must-purchase! Author Ms. Nazia Habib has created an outstanding textbook that is perfect for self-directed learning. It begins with an extremely thorough and easy-to-understand explanation of the theoretical concepts surrounding reinforcement learning, and provides extensive information on the coding process with Q-learning, using easy-to-follow examples as well as companion coding exercises to help you integrate your newfound knowledge as you progress through the book. As your skills progress, more complex examples, including neural networks, are introduced, and the applications are endless! So if you want to start building your expertise in programming for artificial intelligence, then this book is a must-read!
Amazon Verified review
Dr. Mark Potter, May 15, 2020 (1 star)
Limited in scope, not a great read.
Amazon Verified review
roman575, Jun 30, 2020 (1 star)
Introductions, repetitions, conclusions, summaries, and installation instructions comprise 90% of the book. The essential material is very basic and could be found in any 20-page blog post.
Amazon Verified review

FAQs

What is included in a Packt subscription?

A subscription provides you with full access to view all Packt and licensed content online, including exclusive access to Early Access titles. Depending on the tier chosen, you can also earn credits and discounts to use toward owning content.

How can I cancel my subscription?

To cancel your subscription, go to the account page, found in the top right of the page or at https://subscription.packtpub.com/my-account/subscription. From there, you will see the ‘cancel subscription’ button in the grey box containing your subscription information.

What are credits?

Credits can be earned by reading 40 sections of any title within the payment cycle, which is a month starting from the day of subscription payment. You also earn a credit every month if you subscribe to our annual or 18-month plans. Credits can be used to buy DRM-free books, the same way that you would pay for a book. Your credits can be found on the subscription homepage, subscription.packtpub.com, by clicking on the ‘My Library’ dropdown and selecting ‘Credits’.

What happens if an Early Access Course is cancelled?

Projects are rarely cancelled, but sometimes it's unavoidable. If an Early Access course is cancelled or excessively delayed, you can exchange your purchase for another course. For further details, please contact us here.

Where can I send feedback about an Early Access title?

If you have any feedback about the product you're reading, or Early Access in general, then please fill out a contact form here and we'll make sure the feedback gets to the right team. 

Can I download the code files for Early Access titles?

We try to ensure that all books in Early Access have code available to use, download, and fork on GitHub. This helps us be more agile in the development of the book, and helps keep the often changing code base of new versions and new technologies as up to date as possible. Unfortunately, however, there will be rare cases when it is not possible for us to have downloadable code samples available until publication.

When we publish the book, the code files will also be available to download from the Packt website.

How accurate is the publication date?

The publication date is as accurate as we can be at any point in the project. Unfortunately, delays can happen. Often those delays are out of our control, such as changes to the technology code base or delays in the tech release. We do our best to give you an accurate estimate of the publication date at any given time, and as more chapters are delivered, the delivery date will become more accurate.

How will I know when new chapters are ready?

We'll let you know every time there has been an update to a course that you've bought in Early Access. You'll get an email to let you know there has been a new chapter, or a change to a previous chapter. The new chapters are automatically added to your account, so you can also check back there any time you're ready and download or read them online.

I am a Packt subscriber, do I get Early Access?

Yes, all Early Access content is fully available through your subscription. You will need to have a paid or active trial subscription in order to access all titles.

How is Early Access delivered?

Early Access is currently only available as a PDF or through our online reader. As we make changes or add new chapters, the files in your Packt account will be updated so you can download them again or view them online immediately.

How do I buy Early Access content?

Early Access is a way of us getting our content to you quicker, but the method of buying the Early Access course is still the same. Just find the course you want to buy, go through the check-out steps, and you’ll get a confirmation email from us with information and a link to the relevant Early Access courses.

What is Early Access?

Keeping up to date with the latest technology is difficult; there are new versions, new frameworks, and new techniques all the time. This feature gives you a head start on our content as it's being created. With Early Access you'll receive each chapter as it's written, and get regular updates throughout the product's development, as well as the final course as soon as it's ready.

We created Early Access as a means of giving you the information you need as soon as it's available. As we go through the process of developing a course, 99% of it can be ready, but we can't publish until that last 1% falls into place. Early Access helps to unlock the potential of our content early, to help you start your learning when you need it most. You not only get access to every chapter as it's delivered, edited, and updated, but you'll also get the finalized, DRM-free product to download in any format you want when it's published. As a member of Packt, you'll also be eligible for our exclusive offers, including a free course every day, and discounts on new and popular titles.