Hands-On Machine Learning for Algorithmic Trading

You're reading from Hands-On Machine Learning for Algorithmic Trading: Design and implement investment strategies based on smart algorithms that learn from data using Python

Product type: Paperback
Published: December 2018
Publisher: Packt
ISBN-13: 9781789346411
Length: 684 pages
Edition: 1st Edition

Authors (2): Jeffrey Yau, Stefan Jansen
Table of Contents

Preface
1. Machine Learning for Trading
2. Market and Fundamental Data
3. Alternative Data for Finance
4. Alpha Factor Research
5. Strategy Evaluation
6. The Machine Learning Process
7. Linear Models
8. Time Series Models
9. Bayesian Machine Learning
10. Decision Trees and Random Forests
11. Gradient Boosting Machines
12. Unsupervised Learning
13. Working with Text Data
14. Topic Modeling
15. Word Embeddings
16. Deep Learning
17. Convolutional Neural Networks
18. Recurrent Neural Networks
19. Autoencoders and Generative Adversarial Nets
20. Reinforcement Learning
21. Next Steps
22. Other Books You May Enjoy

Gradient Boosting Machines

In the previous chapter, we learned how random forests improve on the predictions of individual decision trees by combining them into an ensemble that reduces their high variance. Random forests use bagging, short for bootstrap aggregation, to introduce random elements into the process of growing the individual trees.

More specifically, bagging draws samples from the data with replacement, so each tree is trained on a different but equal-sized random subset of the data, with some observations repeating. Random forests also randomly select a subset of the features, so that both the rows and the columns used to train each tree are randomized versions of the original data. The ensemble then generates predictions by averaging the outputs of the individual trees.
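To make this recipe concrete, here is a minimal sketch in Python: bootstrap the rows, subsample the columns, fit one deep tree per sample, and average the predictions. The synthetic dataset, ensemble size, and feature count are illustrative assumptions rather than values from the book, and note that canonical random forest implementations such as scikit-learn's resample features at each split rather than once per tree:

import numpy as np
from sklearn.datasets import make_regression
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(42)

# Hypothetical synthetic data standing in for a training set
X, y = make_regression(n_samples=500, n_features=20, noise=10.0, random_state=42)

n_trees, max_features = 25, 8  # illustrative ensemble size and column-subset size
trees, subsets = [], []

for _ in range(n_trees):
    # Bagging: a bootstrap sample, drawn with replacement, the same size
    # as the data, so some observations repeat and others are left out
    rows = rng.integers(0, len(X), size=len(X))
    # Random column subset: each tree sees only a fraction of the features
    cols = rng.choice(X.shape[1], size=max_features, replace=False)
    tree = DecisionTreeRegressor(max_depth=None)  # trees are grown deep
    tree.fit(X[rows][:, cols], y[rows])
    trees.append(tree)
    subsets.append(cols)

# The ensemble prediction averages the outputs of the individual trees
y_hat = np.mean([t.predict(X[:, c]) for t, c in zip(trees, subsets)], axis=0)

Averaging many such decorrelated trees is what drives the variance reduction: the bootstrap rows and random column subsets ensure the trees make different errors, which partially cancel in the mean.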

Individual trees are usually grown deep...
