The representation of input text as a bag of tokens is called BoW (Bag of Words) processing. The drawback of BoW is that we discard word order and most of the grammatical structure, which sometimes results in losing the context of the words. In the BoW approach, we first quantify the importance of each word in the context of each document we want to analyze.
Fundamentally, there are three different ways of quantifying the importance of the words in the context of each document:
Binary: A feature will have a value of 1 if the word appears in the text or 0 otherwise.
Count: A feature will have as its value the number of times the word appears in the text, or 0 if it does not appear.
Term frequency/Inverse document frequency (TF-IDF): The value of the feature reflects how frequently a word appears in a single document relative to how common it is across the entire corpus, so words that are frequent in one document but rare overall receive the highest weights.
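The three weighting schemes above can be sketched in plain Python. This is a minimal illustration, not a production implementation; the toy corpus, the helper names, and the use of a simple logarithmic IDF are all assumptions made for the example:

```python
import math
from collections import Counter

# Toy corpus: each document is a list of tokens (assumed pre-tokenized).
docs = [
    "the cat sat on the mat".split(),
    "the dog chased the cat".split(),
]
vocab = sorted({w for d in docs for w in d})

def binary_vector(doc):
    # Binary: 1 if the word appears in the document, 0 otherwise.
    return [1 if w in doc else 0 for w in vocab]

def count_vector(doc):
    # Count: number of times the word appears in the document.
    counts = Counter(doc)
    return [counts[w] for w in vocab]

def tfidf_vector(doc):
    # TF-IDF: term frequency in this document, scaled by how rare
    # the word is across the whole corpus.
    counts = Counter(doc)
    n_docs = len(docs)
    vec = []
    for w in vocab:
        tf = counts[w] / len(doc)
        df = sum(1 for d in docs if w in d)  # documents containing w
        idf = math.log(n_docs / df) if df else 0.0
        vec.append(tf * idf)
    return vec
```

Note how a word like "the", which occurs in every document, gets a TF-IDF weight of zero (its IDF is log(1) = 0), while a word unique to one document, such as "mat", gets a positive weight. Libraries such as scikit-learn package these schemes as `CountVectorizer(binary=True)`, `CountVectorizer`, and `TfidfVectorizer`.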