In this chapter, we have seen how word2vec can be used to capture the semantics of words, and how much simple vectorization techniques can help us. We have explored several applications of word2vec and examined the technicalities of the model. Along the way, I introduced a number of new mathematical and statistical terms to give you a better understanding of it, turning the word2vec black box into a white box. I have also implemented both basic and extended examples, using a variety of libraries and APIs to develop word2vec models. We have seen the advantages of vectorization in deep learning as well. Finally, we extended our understanding of word2vec to the related concepts of para2vec, doc2vec, and GloVe.
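To recap the core idea in code: once words are represented as vectors, semantic relatedness can be measured with cosine similarity. The sketch below uses tiny hand-made 3-dimensional vectors purely for illustration; the values are not from any trained model, and a real word2vec model would produce much higher-dimensional vectors.

```python
import numpy as np

def cosine_similarity(u, v):
    # cosine of the angle between two word vectors:
    # close to 1 for similar directions, close to 0 for unrelated ones
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# toy "word vectors" (illustrative values, not learned by word2vec)
king = np.array([0.8, 0.6, 0.1])
queen = np.array([0.7, 0.7, 0.2])
apple = np.array([0.1, 0.2, 0.9])

print(cosine_similarity(king, queen))  # high: semantically related words
print(cosine_similarity(king, apple))  # lower: unrelated words
```

The same comparison is what methods such as gensim's `model.wv.most_similar()` perform internally over all vocabulary vectors.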
The next chapter will give you an in-depth idea of how rule-based techniques are used in order...