We now know how to convert text strings into numerical vectors that capture some of their meaning. In this chapter, we will look at how to put those vectors to use. Embedding is the more commonly used term for such word vectors and other numerical representations of text.
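To make the idea concrete, here is a minimal sketch of looking up pre-trained word embeddings. It assumes the gensim library and its downloadable GloVe vectors are available; this is only an illustration, not the chapter's own pipeline:

```python
# Minimal sketch: look up pre-trained word embeddings with gensim.
# Assumes gensim is installed and can download the small GloVe model.
import gensim.downloader as api

# Each word maps to a 50-dimensional numerical vector.
glove = api.load("glove-wiki-gigaword-50")

print(glove["king"].shape)                  # (50,)
print(glove.most_similar("king", topn=3))   # semantically close words
```

Words with related meanings end up with nearby vectors, which is exactly the property we will exploit when feeding text into machine learning models.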
We are still following the broad outline from the first chapter, that is, text → representations → models → evaluation → deployment.
We will continue working with text classification as our example task. This is mainly because it is a simple task to demonstrate, yet almost all of the ideas in this book extend to other problems as well. The main focus ahead, however, is machine learning for text classification.
To sum up, this chapter covers the following topics:
- Sentiment analysis as a specific class and example of text classification...