Python Deep Learning

Next generation techniques to revolutionize computer vision, AI, speech and data analysis

Product type: Paperback
Published: Apr 2017
Publisher: Packt
ISBN-13: 9781786464453
Length: 406 pages
Edition: 1st Edition
Authors (4): Peter Roelants, Daniel Slater, Valentino Zocca, Gianmario Spacagna
Table of Contents (12 chapters)

Preface
1. Machine Learning – An Introduction
2. Neural Networks
3. Deep Learning Fundamentals
4. Unsupervised Feature Learning
5. Image Recognition
6. Recurrent Neural Networks and Language Models
7. Deep Learning for Board Games
8. Deep Learning for Computer Games
9. Anomaly Detection
10. Building a Production-Ready Intrusion Detection System
Index

Quick recap on reinforcement learning


We first encountered reinforcement learning in Chapter 1, Machine Learning – An Introduction, when we looked at the three different types of learning processes: supervised, unsupervised, and reinforcement. In reinforcement learning, an agent receives rewards within an environment. For example, the agent might be a mouse in a maze and the reward might be some food somewhere in that maze. Reinforcement learning can sometimes feel a bit like a supervised recurrent network problem. A network is given a series of data and must learn a response.

The key distinction that makes a task a reinforcement learning problem is that the responses the agent gives change the data it receives in future time steps. If the mouse turns left instead of right at a T-junction of the maze, its next state changes accordingly. In contrast, supervised recurrent networks simply predict a series; the predictions they make do not influence the future values in the series.
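The feedback loop described above can be sketched as a minimal agent–environment interaction. The following is an illustrative toy (the `Maze` class, the corridor layout, and the reward of 1.0 at the goal are assumptions for this example, not code from the book): a mouse wanders a five-cell corridor, and each action it takes determines the state it sees next.

```python
import random

class Maze:
    """Toy corridor of cells 0..4; the food (reward) sits at cell 4."""
    def __init__(self):
        self.state = 0

    def step(self, action):
        # The agent's action determines the next state it observes --
        # this feedback loop is what makes the task reinforcement
        # learning rather than supervised sequence prediction.
        if action == 'right':
            self.state = min(self.state + 1, 4)
        else:
            self.state = max(self.state - 1, 0)
        reward = 1.0 if self.state == 4 else 0.0
        done = self.state == 4
        return self.state, reward, done

random.seed(0)
env = Maze()
total_reward, done = 0.0, False
while not done:
    action = random.choice(['left', 'right'])  # untrained agent
    state, reward, done = env.step(action)
    total_reward += reward

print(total_reward)
```

Even this untrained, randomly acting mouse eventually stumbles onto the food; a learning agent would instead adjust its action choices based on the rewards it has collected so far.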

The AlphaGo...
