In this chapter, we introduced the basic ideas and models for using a neural network to learn distributed word representations. We focused on the Word2Vec model, showing how to train a model as well as how to load pre-trained vectors for downstream NLP applications. In the next chapter, we will cover more advanced deep learning models in NLP, such as recurrent neural networks and long short-term memory (LSTM) models, along with several real-world applications.