Summary
In this chapter, we covered the concept of transfer learning and how it relates to pre-trained networks. We used the pre-trained deep learning networks VGG16 and ResNet50 to classify various images. We practiced taking advantage of such pre-trained networks through techniques such as feature extraction and fine-tuning, which let us train models faster and more accurately. Finally, we learned the powerful technique of tweaking existing models so that they work with our own dataset. Building our own ANN on top of an existing CNN in this way is one of the most widely used techniques in industry.
In the next chapter, we will learn about sequential modeling and sequential memory by looking at some real-life cases involving Google Assistant. Furthermore, we will learn how sequential modeling is related to Recurrent Neural Networks (RNNs). We will learn about the vanishing gradient problem in detail and how an LSTM overcomes the limitations of a simple RNN.