Hands-On Machine Learning for Algorithmic Trading

Design and implement investment strategies based on smart algorithms that learn from data using Python

Product type: Paperback
Published: December 2018
Publisher: Packt
ISBN-13: 9781789346411
Length: 684 pages
Edition: 1st
Authors (2): Jeffrey Yau, Stefan Jansen
Table of Contents (23 chapters)

Preface
1. Machine Learning for Trading
2. Market and Fundamental Data
3. Alternative Data for Finance
4. Alpha Factor Research
5. Strategy Evaluation
6. The Machine Learning Process
7. Linear Models
8. Time Series Models
9. Bayesian Machine Learning
10. Decision Trees and Random Forests
11. Gradient Boosting Machines
12. Unsupervised Learning
13. Working with Text Data
14. Topic Modeling
15. Word Embeddings
16. Deep Learning
17. Convolutional Neural Networks
18. Recurrent Neural Networks
19. Autoencoders and Generative Adversarial Nets
20. Reinforcement Learning
21. Next Steps
22. Other Books You May Enjoy

Summary

In this chapter, we introduced a different class of ML problems that focuses on automating decisions by agents that interact with an environment. We covered the key features required to define an RL problem and various solution methods.
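To make the agent-environment interaction concrete, the following is a minimal sketch of the interaction loop using the pre-0.26 OpenAI Gym API; the CartPole environment and the random placeholder policy are illustrative choices, not taken from the chapter.

# Minimal agent-environment loop (pre-0.26 Gym API); CartPole and the random
# policy are illustrative stand-ins for the environments used in the chapter.
import gym

env = gym.make('CartPole-v0')

for episode in range(5):
    state = env.reset()                  # initial observation from the environment
    done, total_reward = False, 0.0
    while not done:
        action = env.action_space.sample()             # placeholder policy: act at random
        state, reward, done, info = env.step(action)   # environment returns next state and reward
        total_reward += reward
    print(f'episode {episode}: total reward = {total_reward:.0f}')

env.close()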

We saw how to frame and analyze an RL problem as a finite MDP, and how to compute a solution using value and policy iteration. We then moved on to more realistic situations in which the transition probabilities and rewards are unknown to the agent, and saw how Q-learning builds on the key recursive relationship defined by the Bellman optimality equation in the MDP case. We also saw how to solve RL problems using Python, both for simple MDPs and, with Q-learning, for more complex environments.
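As a reminder of how the Bellman optimality equation drives the Q-learning update, here is a hedged sketch of tabular Q-learning; the FrozenLake environment and all hyperparameter values are illustrative assumptions rather than the book's exact settings.

# Tabular Q-learning sketch (pre-0.26 Gym API); environment and hyperparameters
# are illustrative, not the chapter's exact configuration.
import numpy as np
import gym

env = gym.make('FrozenLake-v0')
n_states, n_actions = env.observation_space.n, env.action_space.n
Q = np.zeros((n_states, n_actions))

alpha, gamma, epsilon = 0.1, 0.99, 0.1   # learning rate, discount factor, exploration rate

for episode in range(5000):
    state = env.reset()
    done = False
    while not done:
        # epsilon-greedy behavior policy
        if np.random.rand() < epsilon:
            action = env.action_space.sample()
        else:
            action = Q[state].argmax()
        next_state, reward, done, _ = env.step(action)
        # move Q(s, a) toward the Bellman optimality target r + gamma * max_a' Q(s', a')
        target = reward + gamma * Q[next_state].max() * (not done)
        Q[state, action] += alpha * (target - Q[state, action])
        state = next_state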

Finally, we expanded our scope to continuous states and actions, and applied the deep Q-learning algorithm to the more complex Lunar Lander environment.
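The sketch below illustrates the core pieces of deep Q-learning for LunarLander-v2: a Q-network, an experience-replay buffer filled by a random policy, and one training step on Bellman targets computed with a separate target network. The Keras architecture, optimizer, and hyperparameters are assumptions for illustration, not the book's implementation, and LunarLander-v2 requires gym's Box2D extras.

# Core DQN components for LunarLander-v2 (pre-0.26 Gym API); network size,
# optimizer, and hyperparameters are illustrative assumptions.
from collections import deque
import random

import numpy as np
import gym
import tensorflow as tf
from tensorflow.keras import layers

env = gym.make('LunarLander-v2')
state_dim = env.observation_space.shape[0]
n_actions = env.action_space.n
gamma = 0.99

def build_q_network():
    return tf.keras.Sequential([
        layers.Dense(64, activation='relu', input_shape=(state_dim,)),
        layers.Dense(64, activation='relu'),
        layers.Dense(n_actions)          # one Q-value per discrete action
    ])

online_net = build_q_network()
target_net = build_q_network()
target_net.set_weights(online_net.get_weights())
online_net.compile(optimizer='adam', loss='mse')

# fill a small replay buffer with random-policy transitions
buffer = deque(maxlen=10000)
state = env.reset()
for _ in range(1000):
    action = env.action_space.sample()
    next_state, reward, done, _ = env.step(action)
    buffer.append((state, action, reward, next_state, done))
    state = env.reset() if done else next_state

# one gradient step on a sampled mini-batch
batch = random.sample(buffer, 64)
states, actions, rewards, next_states, dones = map(np.array, zip(*batch))

# Bellman targets: r + gamma * max_a' Q_target(s', a'), zeroed at terminal states
next_q = target_net.predict(next_states, verbose=0).max(axis=1)
targets = online_net.predict(states, verbose=0)
targets[np.arange(64), actions] = rewards + gamma * next_q * (1 - dones)

online_net.fit(states, targets, verbose=0)   # full training repeats this every step

In a complete agent, this update would run inside the interaction loop with an epsilon-greedy policy and periodic copying of the online weights to the target network.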

...