In this chapter, we studied a specialized form of neural network, the CNN, which helps us capture spatial relationships and patterns in data. We looked at the various components of a CNN, including convolutions, pooling, and fully connected layers, and the functionality each provides. We saw how spatial relationships can exist in text and how we can extract them using CNNs. Finally, we applied all of this understanding to a fairly complex problem: detecting sarcasm in text data using CNNs together with pre-trained word embeddings from the Word2Vec algorithm.
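The core pipeline summarized above, embedding a sentence, convolving filters over it, and max-pooling the responses, can be sketched in a few lines of NumPy. This is an illustrative forward pass only; all sizes (embedding dimension, filter width, and so on) are assumed values, and the random vectors stand in for the pre-trained Word2Vec embeddings used in the chapter.

```python
import numpy as np

rng = np.random.default_rng(0)

seq_len, embedding_dim = 10, 8    # a 10-token sentence, 8-dim embeddings (illustrative)
num_filters, filter_width = 4, 3  # 4 filters, each spanning 3 consecutive tokens

# In the chapter these rows would come from a pre-trained Word2Vec model;
# here we substitute random vectors for a self-contained example.
sentence = rng.normal(size=(seq_len, embedding_dim))
filters = rng.normal(size=(num_filters, filter_width, embedding_dim))

# Convolution: slide each filter over the token sequence, taking a dot
# product with every window of filter_width tokens.
conv_out = np.array([
    [np.sum(sentence[i:i + filter_width] * f)
     for i in range(seq_len - filter_width + 1)]
    for f in filters
])  # shape: (num_filters, seq_len - filter_width + 1)

# Max-over-time pooling keeps each filter's strongest response, giving a
# fixed-size feature vector regardless of sentence length.
features = conv_out.max(axis=1)  # shape: (num_filters,)
```

In a full model, `features` would feed into a fully connected layer for classification; the pooling step is what lets sentences of different lengths produce vectors of the same size.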
In the next chapter, we will build on this knowledge and look at another specialized form of neural network, the RNN. We will also examine improvements to the RNN architecture that make it well suited to natural language data, since these networks are designed to capture temporal relationships in data.