In text mining, a dataset is usually called a corpus, and each data sample in it is called a document. Documents are made of tokens, and the set of distinct tokens across the corpus is called the vocabulary. Turning this information into a matrix is called vectorization. In the following sections, we will look at the different kinds of vectorization we can produce.
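To make these terms concrete, here is a minimal sketch; the toy corpus and the whitespace tokenizer are illustrative assumptions (real tokenizers also handle punctuation, casing, and so on):

```python
# A toy corpus of two documents (illustrative assumption).
corpus = [
    "the cat sat on the mat",   # document 0
    "the dog sat on the log",   # document 1
]

# Tokenize each document; plain whitespace splitting for simplicity.
tokenized_docs = [doc.split() for doc in corpus]

# The vocabulary is the set of distinct tokens in the corpus.
vocabulary = sorted({token for doc in tokenized_docs for token in doc})

print(vocabulary)
# ['cat', 'dog', 'log', 'mat', 'on', 'sat', 'the']
```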
Vector space model
We are still missing our beloved feature matrices, where each token has its own column and each document is represented by a separate row. This kind of representation for textual data is known as the vector space model. From a linear-algebraic point of view, the documents in this representation are seen as vectors (rows), and the different terms are the dimensions of the space (columns), hence the name vector space model. In the next section, we will learn how to vectorize our documents.
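To see the rows-and-columns picture in practice, here is a minimal sketch assuming scikit-learn is installed; its CountVectorizer builds a document-by-term matrix of this shape (the counting scheme it applies is the bag of words representation introduced next):

```python
# A minimal sketch of the vector space model using scikit-learn's
# CountVectorizer; the two toy documents are illustrative.
# Each row of the matrix is a document vector, and each column
# is one term dimension of the space.
from sklearn.feature_extraction.text import CountVectorizer

corpus = [
    "the cat sat on the mat",
    "the dog sat on the log",
]

vectorizer = CountVectorizer()
doc_term_matrix = vectorizer.fit_transform(corpus)  # sparse matrix

print(vectorizer.get_feature_names_out())
# ['cat' 'dog' 'log' 'mat' 'on' 'sat' 'the']
print(doc_term_matrix.toarray())
# [[1 0 0 1 1 1 2]
#  [0 1 1 0 1 1 2]]
```

Note how the two documents become two rows, and every distinct term in the vocabulary becomes its own column, exactly as the vector space model prescribes.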
Bag of words
We need to convert the documents into...