Hands-On Machine Learning for Algorithmic Trading

You're reading from Hands-On Machine Learning for Algorithmic Trading: Design and implement investment strategies based on smart algorithms that learn from data using Python

Product type: Paperback
Published: Dec 2018
Publisher: Packt
ISBN-13: 9781789346411
Length: 684 pages
Edition: 1st
Authors (2): Jeffrey Yau and Stefan Jansen
Table of Contents (23 chapters)

Preface
1. Machine Learning for Trading
2. Market and Fundamental Data
3. Alternative Data for Finance
4. Alpha Factor Research
5. Strategy Evaluation
6. The Machine Learning Process
7. Linear Models
8. Time Series Models
9. Bayesian Machine Learning
10. Decision Trees and Random Forests
11. Gradient Boosting Machines
12. Unsupervised Learning
13. Working with Text Data
14. Topic Modeling
15. Word Embeddings
16. Deep Learning
17. Convolutional Neural Networks
18. Recurrent Neural Networks
19. Autoencoders and Generative Adversarial Nets
20. Reinforcement Learning
21. Next Steps
22. Other Books You May Enjoy

Recurrent Neural Networks

In the last chapter, we covered the ability of Convolutional Neural Networks (CNNs) to learn feature representations from grid-like data. In this chapter, we introduce recurrent neural networks (RNNs), which are designed for processing sequential data.

Feedforward Neural Networks (FFNNs) treat the feature vectors for each sample as independent and identically distributed. As a result, they do not systematically take prior data points into account when evaluating the current observation. In other words, they have no memory.
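To make this concrete, here is a minimal NumPy sketch (illustrative only, not the book's code; all names are assumptions) showing that a feedforward layer scores each time step in isolation: reversing the input sequence simply reverses the outputs, because nothing from earlier steps feeds into the current one.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy feedforward layer: one weight matrix applied to each observation.
W = rng.normal(size=(3, 4))        # 3 input features -> 4 hidden units
b = np.zeros(4)

def ffnn_layer(x):
    """Score a single observation; nothing from earlier steps is used."""
    return np.tanh(x @ W + b)

series = rng.normal(size=(10, 3))  # 10 time steps, 3 features each

out_fwd = np.array([ffnn_layer(x) for x in series])
out_rev = np.array([ffnn_layer(x) for x in series[::-1]])

# Reversing the sequence merely reverses the outputs: each step is
# scored independently, so the network has no memory of prior steps.
assert np.allclose(out_fwd, out_rev[::-1])
```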

One-dimensional convolutions, which we covered in the previous chapter, compute each output element as a function of a small window of neighboring inputs. However, they allow only shallow parameter sharing: the same convolutional kernel is applied at each time step.
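As an illustration of this parameter sharing, the following NumPy sketch (again illustrative, not the book's code) applies one small kernel at every position, so each output depends only on a short window of its neighbors, and the kernel weights are the sole parameters.

```python
import numpy as np

def conv1d(x, kernel):
    """Valid 1D convolution: slide the same kernel across the sequence."""
    k = len(kernel)
    return np.array([x[t:t + k] @ kernel for t in range(len(x) - k + 1)])

x = np.arange(10.0)                    # toy univariate sequence
kernel = np.array([0.25, 0.5, 0.25])   # one shared three-step filter

y = conv1d(x, kernel)
# Each y[t] depends only on the window x[t:t+3]; the three kernel
# weights are the only parameters and are reused at every position.
print(y)
```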

The major innovation of the RNN model is that each output...
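In a vanilla (Elman) RNN, for instance, each hidden state is computed from the previous hidden state together with the current input, so information can persist across time steps. The following NumPy sketch is a minimal illustration under that assumption; the variable names and tanh activation are choices made here, not the book's code.

```python
import numpy as np

rng = np.random.default_rng(0)
n_features, n_hidden = 3, 4

W_x = rng.normal(scale=0.1, size=(n_features, n_hidden))  # input weights
W_h = rng.normal(scale=0.1, size=(n_hidden, n_hidden))    # recurrent weights
b = np.zeros(n_hidden)

def rnn_step(h_prev, x_t):
    """One recurrence: the new state mixes the prior state and new input."""
    return np.tanh(h_prev @ W_h + x_t @ W_x + b)

series = rng.normal(size=(10, n_features))  # 10 time steps

h = np.zeros(n_hidden)                      # initial hidden state
for x_t in series:
    h = rnn_step(h, x_t)                    # h carries memory forward

print(h)  # the final state summarizes the entire sequence
```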
