So far, we've covered a number of basic neural network architectures and their learning algorithms. These are the necessary building blocks for designing networks capable of more advanced tasks, such as machine translation, speech recognition, time series prediction, and image segmentation. In this chapter, we'll cover a class of algorithms and architectures that excel at these and other tasks because of their ability to model sequential dependencies in the data.
These algorithms have proven to be incredibly powerful, and their variants have found wide application in industry and consumer products, ranging from machine translation and text generation to named entity recognition and sensor data analysis. When you say Okay, Google! or Hey, Siri!, behind the scenes, a type of trained recurrent neural network (RNN) is at work processing your speech.