Hands-On Reinforcement Learning for Games

You're reading from Hands-On Reinforcement Learning for Games: Implementing self-learning agents in games using artificial intelligence techniques, by Micheal Lanham. Published by Packt in January 2020. ISBN-13: 9781839214936. 432 pages, 1st Edition, paperback.

Exploring Q-learning with contextual bandits

Now that we understand how to calculate values and the delicate balance of exploration and exploitation, we can move on to solving an entire MDP. As we will see, different solution methods work better or worse depending on the RL problem and environment; that is, in fact, the basis for the next several chapters. For now, though, we just want to introduce a method that is basic enough to solve the full RL problem. We describe the full RL problem as the non-stationary or contextual multi-armed bandit problem, that is, an agent that moves to a different bandit each episode and chooses a single arm from that bandit's multiple arms. Each bandit now represents a different state, and we no longer want to determine just the value of an action, but its quality. We can calculate the quality of an action given a state using the Q-learning equation shown here:

Q(s_t, a_t) ← Q(s_t, a_t) + α [r_{t+1} + γ max_a Q(s_{t+1}, a) - Q(s_t, a_t)]
In the preceding equation, we have the following:

  • s: state
  • s_t: current state
  • a: next action
  • a_t: current action
  • γ: gamma, the reward discount factor
  • α: alpha, the learning rate
  • r: reward
  • r_{t+1}: next reward
  • Q: quality

Now, don't be overly concerned if these terms seem a little foreign and the equation appears overwhelming. This is the Q-learning equation, developed by Chris Watkins in 1989, and it is a method that simplifies solving a Finite Markov Decision Process (FMDP). The important thing to observe at this point is the similarity this equation shares with the earlier action-value equation. In Chapter 2, Dynamic Programming and the Bellman Equation, we will learn in more detail how this equation is derived and how it functions. For now, the key concept to grasp is that we are now calculating a quality value based on previous states and the rewards of the actions taken there, rather than just a single action-value. This, in turn, allows our agent to plan better across multiple states. In the next section, we will implement a Q-learning agent that can play several multi-armed bandits and maximize its rewards.
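Before we write any agent code, it can help to trace the update once by hand. The following is a small worked example with made-up numbers, purely for illustration: a current estimate Q(s_t, a_t) of 10.0, a reward of 0.5, a best next-state estimate of 10.0, α = 0.1, and γ = 0.9:

# A single Q-learning update traced with made-up numbers (illustration only)
alpha = 0.1          # learning rate
gamma = 0.9          # reward discount
q_current = 10.0     # Q(s_t, a_t) before the update
r = 0.5              # reward received for the pull
max_q_next = 10.0    # max_a Q(s_{t+1}, a) in the next state

td_target = r + gamma * max_q_next          # 0.5 + 0.9 * 10.0 = 9.5
td_error = td_target - q_current            # 9.5 - 10.0 = -0.5
q_updated = q_current + alpha * td_error    # 10.0 + 0.1 * (-0.5) = 9.95
print(q_updated)                            # prints 9.95

Notice how little the estimate moves in a single step; the error is scaled by the learning rate, so it takes many episodes of small corrections for the values to settle.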

Implementing a Q-learning agent

While the Q-learning equation may look a lot more complex, implementing it is not that different from building the agent that just learned values earlier. To keep things simple, we will reuse the same base code and turn it into a Q-learning example. Open up the code example, Chapter_1_4.py, and follow the exercise here:

  1. Here is the full code listing for reference:
import random

arms = 7
bandits = 7
learning_rate = .1
gamma = .9
episodes = 10000

# build the reward table: rows are bandits (states), columns are arms (actions)
reward = []
for i in range(bandits):
    reward.append([])
    for j in range(arms):
        reward[i].append(random.uniform(-1, 1))
print(reward)

# build the Q table with optimistic initial values
Q = []
for i in range(bandits):
    Q.append([])
    for j in range(arms):
        Q[i].append(10.0)
print(Q)

def greedy(values):
    return values.index(max(values))

def learn(state, action, reward, next_state):
    q = gamma * max(Q[next_state])
    q += reward
    q -= Q[state][action]
    q *= learning_rate
    q += Q[state][action]
    Q[state][action] = q

# agent learns
bandit = random.randint(0, bandits - 1)
for i in range(0, episodes):
    last_bandit = bandit
    bandit = random.randint(0, bandits - 1)
    action = greedy(Q[bandit])
    r = reward[last_bandit][action]
    learn(last_bandit, action, r, bandit)
print(Q)
  2. The sections of code that follow are new and worth paying closer attention to. Let's take a look at each section in more detail here:
arms = 7
bandits = 7
gamma = .9
  3. We start by initializing the arms variable to 7, and then a new bandits variable to 7 as well. Recall that arms is analogous to actions, and bandits likewise to states. The last new variable, gamma, is a learning parameter used to discount rewards. We will explore this discount factor concept in future chapters:
reward = []
for i in range(bandits):
    reward.append([])
    for j in range(arms):
        reward[i].append(random.uniform(-1, 1))
print(reward)
  4. The next section of code builds up the reward table matrix as a set of random values from -1 to 1. We use a list of lists in this example to better represent the separate concepts:
Q = []
for i in range(bandits):
    Q.append([])
    for j in range(arms):
        Q[i].append(10.0)
print(Q)
  5. The following section is very similar, but this time it sets up a Q table matrix to hold our calculated quality values. Notice how we initialize each starting Q value to 10.0. This optimistic starting value is higher than any reward an arm can pay out, which nudges the greedy policy into trying every arm; we will discuss this in more detail later.
  6. Since our states and actions can all be mapped onto a matrix/table, we refer to our RL system as using a model. A model represents all of the actions and states of an environment:
def learn(state, action, reward, next_state):
    q = gamma * max(Q[next_state])  # discounted estimate of the next state's best value
    q += reward                     # add the reward just received
    q -= Q[state][action]           # subtract the old estimate, forming the error
    q *= learning_rate              # scale the error by alpha
    q += Q[state][action]           # add back the old estimate
    Q[state][action] = q            # store the updated quality value
  7. Next, we define a new function called learn. This function is just a straight implementation of the Q-learning equation we observed earlier:
bandit = random.randint(0, bandits - 1)
for i in range(0, episodes):
    last_bandit = bandit
    bandit = random.randint(0, bandits - 1)
    action = greedy(Q[bandit])
    r = reward[last_bandit][action]
    learn(last_bandit, action, r, bandit)
print(Q)
  8. Finally, the agent learning section is updated significantly with new code. This new code sets up the parameters we need for the learn function we just looked at. Notice how the bandit, or state, is selected randomly on each iteration. Essentially, this means our agent is just randomly walking from bandit to bandit.
  9. Run the code as you normally would and notice the new calculated Q values printed out at the end. Do they match the rewards for each of the arm pulls?

Likely, a few of your arms won't match up with their respective reward values. This is because the Q-learning equation solves an entire MDP, but our agent is NOT moving in an MDP. Instead, our agent is just randomly moving from state to state with no regard for which state it saw before. Think back to our example and you will realize that, since our current state does not affect our future state, the problem lacks the Markov property and hence is not an MDP. However, that doesn't mean we can't successfully solve this problem, and we will look at doing that in the next section.
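If you would rather quantify the mismatch than eyeball the printed lists, a quick check such as the following can be pasted below the training loop in Chapter_1_4.py. This helper is not part of the book's code; it simply reuses the existing greedy function and the Q and reward tables:

# Compare the learned greedy arm against the truly best arm for each bandit
matches = 0
for state in range(bandits):
    learned_best = greedy(Q[state])       # arm the agent would pick
    true_best = greedy(reward[state])     # arm with the highest reward
    print(state, learned_best, true_best)
    if learned_best == true_best:
        matches += 1
print("bandits solved:", matches, "of", bandits)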

Removing discounted rewards

The problem with our current solution, using the full Q-learning equation, is that the equation assumes that the state our agent is in affects future states. Except, remember, in our example the agent just walked randomly from bandit to bandit. This means that using any previous state information is useless, as we saw. Fortunately, we can easily fix this by removing the concept of discounted rewards. Recall the new variable, gamma, that appeared in the term γ max_a Q(s_{t+1}, a). Gamma and this term are a way of discounting future rewards, something we will discuss at length starting in Chapter 2, Dynamic Programming and the Bellman Equation. For now, though, we can fix this sample up by just removing that term from our learn function. Let's open up the code example, Chapter_1_5.py, and follow the exercise here:

  1. The only section of code we really need to focus on is the updated learn function, here:
def learn(state, action, reward, next_state):
    #q = gamma * max(Q[next_state])
    q = 0
    q += reward
    q -= Q[state][action]
    q *= learning_rate
    q += Q[state][action]
    Q[state][action] = q
  2. The first line of code in the function was responsible for discounting the future reward of the next state. Since none of the states in our example are connected, we can just comment out that line and initialize q to 0 on the following line instead.
  3. Run the code as you normally would. Now you should see Q values closely matching their respective rewards.

By omitting the discounted rewards part of the calculation, hopefully you can appreciate that the problem reverts to a plain value calculation. Alternatively, you may also realize that if our bandits were connected, that is, if pulling an arm led the agent to another one-armed machine with more actions, and so on, we could then use the full Q-learning equation to solve the problem as well. A small sketch of this idea follows.
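To make that last point more tangible, here is a minimal, hypothetical sketch of connected bandits. It is not from the book's code: the reward numbers, the transitions table (pulling arm j on bandit i moves the agent to bandit transitions[i][j]), and the occasional random pull are all invented for illustration, but because the current state now influences future states, the full update with gamma becomes useful again:

import random

arms = 3
bandits = 3
learning_rate = .1
gamma = .9
episodes = 10000

# made-up rewards and transitions for illustration only
reward = [[0.2, -0.5, 0.1],
          [0.0, 0.8, -0.2],
          [-0.1, 0.3, 1.0]]
transitions = [[1, 2, 0],
               [2, 0, 1],
               [0, 1, 2]]

Q = [[10.0] * arms for _ in range(bandits)]

def greedy(values):
    return values.index(max(values))

def learn(state, action, r, next_state):
    # full Q-learning update, including the discounted next-state term
    Q[state][action] += learning_rate * (r + gamma * max(Q[next_state]) - Q[state][action])

bandit = random.randint(0, bandits - 1)
for i in range(episodes):
    action = greedy(Q[bandit])
    if random.random() < 0.1:               # occasional random pull so the agent keeps exploring
        action = random.randint(0, arms - 1)
    r = reward[bandit][action]
    next_bandit = transitions[bandit][action]
    learn(bandit, action, r, next_bandit)
    bandit = next_bandit                     # the agent's choice now determines its next state
print(Q)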

That concludes a very basic introduction to the primary components and elements of RL. Throughout the rest of this book, we will dig into the nuances of policies, values, actions, and rewards.
