Chapter 1, Overview of Neural Networks, describes how to think intuitively about the fundamental nature, structure, and different forms of data. You will learn how to deal with basic data types and advanced data structures (image, video, audio, text, sensor, and multimedia data), and you will learn the underlying abstractions used to extract information from these varying data structures.
Chapter 2, A Deeper Dive into Neural Networks, takes an in-depth look at the mathematical background of neural networks. You will then explore how being a Keras user can make you more productive and help you outperform the competition, thanks to rapid development cycles that iteratively improve your machine learning project outcomes.
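As a quick taste of the workflow described above, the following is a minimal Keras sketch of the define-compile-fit cycle; the layer sizes and the synthetic data are illustrative assumptions, not code from the chapter.

```python
# Minimal sketch of the define-compile-fit cycle that Keras encourages.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Synthetic stand-in data: 1,000 samples with 20 features and binary labels.
x_train = np.random.rand(1000, 20)
y_train = np.random.randint(0, 2, size=(1000,))

# Define a small fully connected network.
model = keras.Sequential([
    keras.Input(shape=(20,)),
    layers.Dense(64, activation="relu"),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),
])

# Compile and train: one short, readable iteration loop.
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(x_train, y_train, epochs=5, batch_size=32, validation_split=0.2)
```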
Chapter 3, Signal Processing - Data Analysis with Neural Networks, familiarizes you with the transformations and normalizations that are essential to making neural networks work well, through complete hands-on examples.
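As a small illustration of the kind of preprocessing the chapter motivates, here is a minimal sketch that standardizes features to zero mean and unit variance; the random array is an illustrative assumption, not data from the book.

```python
# Standardize each feature to zero mean and unit variance before training.
import numpy as np

x = np.random.rand(500, 10) * 100.0          # raw features on an arbitrary scale

mean = x.mean(axis=0)                        # per-feature mean
std = x.std(axis=0) + 1e-8                   # per-feature std (epsilon avoids /0)
x_standardized = (x - mean) / std            # zero mean, unit variance per feature

print(x_standardized.mean(axis=0).round(3))  # ~0 for every feature
print(x_standardized.std(axis=0).round(3))   # ~1 for every feature
```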
Chapter 4, Convolutional Neural Networks, provides an overview of the different types of convolutional and pooling layers that may be used in networks that process visual input, from images on your laptop to databases and real-time Internet of Things (IoT) applications. You will then learn about the processing pipelines associated with CNNs and experiment with the latest object detection APIs and models available.
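The following is a minimal sketch of how convolutional and pooling layers are typically stacked in Keras; the input shape and layer widths are illustrative assumptions rather than the chapter's exact models.

```python
# Minimal sketch of stacking convolutional and pooling layers in Keras.
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    keras.Input(shape=(64, 64, 3)),
    # Convolutional layers learn local visual features from the image.
    layers.Conv2D(32, kernel_size=3, activation="relu"),
    # Pooling layers downsample the feature maps, adding translation tolerance.
    layers.MaxPooling2D(pool_size=2),
    layers.Conv2D(64, kernel_size=3, activation="relu"),
    layers.MaxPooling2D(pool_size=2),
    # Flatten and classify.
    layers.Flatten(),
    layers.Dense(10, activation="softmax"),
])
model.summary()
```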
Chapter 5, Recurrent Neural Networks, takes a deeper look at the theory behind different types of recurrent networks, and at what it means for an algorithm to be Turing complete.
Chapter 6, Long Short-Term Memory Networks, helps you explore in detail a specific type of RNN, the LSTM network, and understand yet another neural network architecture inspired by our own biology.
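A minimal sketch of an LSTM layer consuming a sequence in Keras is shown below; the sequence length, feature count, and unit count are illustrative assumptions.

```python
# Minimal sketch of an LSTM layer processing a sequence in Keras.
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    # 50 timesteps, 8 features per timestep; the LSTM's gates decide what to
    # keep in and drop from its cell state as the sequence is processed.
    keras.Input(shape=(50, 8)),
    layers.LSTM(32),
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")
model.summary()
```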
Chapter 7, Reinforcement Learning with Deep Q-Networks, begins by explaining the underlying architectures of reinforcement learning networks in detail, and then shows how to implement core and extended layers in Keras to achieve the desired outcomes.
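The following is a minimal sketch of the kind of value network at the heart of a DQN, expressed as a plain Keras model that maps a state vector to one Q-value per action; the state size, action count, and layer widths are illustrative assumptions.

```python
# Minimal sketch of a DQN-style value network in Keras.
from tensorflow import keras
from tensorflow.keras import layers

state_size, num_actions = 4, 2   # e.g., a simple control task

q_network = keras.Sequential([
    keras.Input(shape=(state_size,)),
    layers.Dense(24, activation="relu"),
    layers.Dense(24, activation="relu"),
    # Linear output: one estimated Q-value per possible action.
    layers.Dense(num_actions, activation="linear"),
])
# Mean squared error against bootstrapped targets is the usual training signal.
q_network.compile(optimizer=keras.optimizers.Adam(learning_rate=1e-3), loss="mse")
```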
Chapter 8, Autoencoders, provides in-depth knowledge of how autoencoder neural networks function.
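By way of illustration, here is a minimal sketch of a fully connected autoencoder in Keras, with an encoder that compresses the input into a small bottleneck and a decoder that reconstructs it; the 784-dimensional input and 32-unit bottleneck are assumptions, not the chapter's exact architecture.

```python
# Minimal sketch of a fully connected autoencoder in Keras.
from tensorflow import keras
from tensorflow.keras import layers

inputs = keras.Input(shape=(784,))
encoded = layers.Dense(128, activation="relu")(inputs)
bottleneck = layers.Dense(32, activation="relu")(encoded)      # compressed code
decoded = layers.Dense(128, activation="relu")(bottleneck)
outputs = layers.Dense(784, activation="sigmoid")(decoded)     # reconstruction

autoencoder = keras.Model(inputs, outputs)
# Trained to reproduce its own input, so no labels are required.
autoencoder.compile(optimizer="adam", loss="binary_crossentropy")
```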
Chapter 9, Generative Networks, addresses the use case of synthetic data generation and manipulation, commonly achieved through generative models such as variational autoencoders and generative adversarial networks (GANs).
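As a brief illustration, the following sketch defines the two players in a GAN as Keras models: a generator that maps random noise to a synthetic sample and a discriminator that scores samples as real or fake. The dimensions are illustrative assumptions, and the full adversarial training loop is beyond this snippet.

```python
# Minimal sketch of the generator and discriminator in a GAN, as Keras models.
from tensorflow import keras
from tensorflow.keras import layers

latent_dim, data_dim = 16, 784

generator = keras.Sequential([
    keras.Input(shape=(latent_dim,)),
    layers.Dense(128, activation="relu"),
    layers.Dense(data_dim, activation="sigmoid"),   # synthetic sample
])

discriminator = keras.Sequential([
    keras.Input(shape=(data_dim,)),
    layers.Dense(128, activation="relu"),
    layers.Dense(1, activation="sigmoid"),          # probability of "real"
])
discriminator.compile(optimizer="adam", loss="binary_crossentropy")
```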
Chapter 10, Contemplating Present and Future Developments, covers the topics of learning and transferring representations using neural networks. It also includes an overview of potential future developments to look out for in the field of AI, including paradigms like quantum computing.