We began by examining how a biological neuron works and how a similar setup is imitated to build artificial neurons. We then looked at the various components of neural networks, including neurons, layers, activation functions, and dropout, and traced how a signal flows through a network and how the network learns. We discussed Keras, whose high-level APIs make it convenient to build neural networks. Finally, we applied this understanding to an NLP problem, classifying questions with an ANN whose input vectors were built using the TF-IDF vectorization technique.
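As a quick recap of that workflow, the following is a minimal sketch of TF-IDF features feeding a small Keras ANN for question classification. The toy questions, labels, and layer sizes are illustrative assumptions, not the exact setup used in the chapter:

```python
# Sketch only: TF-IDF vectors as input to a small feed-forward Keras network.
from sklearn.feature_extraction.text import TfidfVectorizer
from tensorflow import keras
from tensorflow.keras import layers
import numpy as np

# Hypothetical toy data standing in for the question-classification dataset.
questions = [
    "What is the capital of France?",
    "Who wrote Pride and Prejudice?",
    "When did World War II end?",
    "Where is the Eiffel Tower located?",
]
labels = np.array([0, 1, 2, 0])  # e.g., 0 = location, 1 = person, 2 = date

# Vectorize the questions with TF-IDF; each question becomes a fixed-length vector.
vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(questions).toarray()

# A small ANN: dense layers with ReLU activations, dropout for regularization,
# and a softmax output over the question classes.
model = keras.Sequential([
    layers.Input(shape=(X.shape[1],)),
    layers.Dense(64, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(3, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Train briefly on the toy data; a real run would use a proper train/test split.
model.fit(X, labels, epochs=5, verbose=0)
```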
Now that we understand the architecture of ANNs and have seen NLP applications built on it, let's take this forward and explore how convolutional neural networks can be applied to text data in the next chapter.