Building a bag-of-words model
When we deal with text documents that contain millions of words, we need to convert them into some kind of numeric representation so that machine learning algorithms can analyze the data and output meaningful information. This is where the bag-of-words approach comes into the picture. It is a model that learns a vocabulary from all the words across all the documents, and then models each document by building a histogram of the counts of the words that appear in it.
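To make the idea concrete, here is a minimal, self-contained sketch using scikit-learn's CountVectorizer; the recipe below builds its model step by step with NLTK data instead, and the toy documents here are made up purely for illustration:

from sklearn.feature_extraction.text import CountVectorizer

# Three toy documents; the vocabulary is learned from all of them
docs = [
    'the cat sat on the mat',
    'the dog sat on the mat',
    'the cat chased the dog'
]

vectorizer = CountVectorizer()
doc_term_matrix = vectorizer.fit_transform(docs)

# Each row is one document's histogram: one count per vocabulary word
print(vectorizer.get_feature_names_out())
print(doc_term_matrix.toarray())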
How to do it…
Create a new Python file, and import the following packages:
import numpy as np
from nltk.corpus import brown
from chunking import splitter
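The splitter function is the text-chunking utility from the earlier chunking recipe (chunking.py). If you don't have that file at hand, a minimal sketch of such a splitter, assuming it takes the text and a chunk size in words, could look like the following; your chunking.py may differ:

def splitter(data, num_words):
    # Break the input text into chunks of num_words words each
    words = data.split(' ')
    output = []

    cur_chunk = []
    for word in words:
        cur_chunk.append(word)
        if len(cur_chunk) == num_words:
            output.append(' '.join(cur_chunk))
            cur_chunk = []

    # Keep any leftover words as a final, shorter chunk
    if cur_chunk:
        output.append(' '.join(cur_chunk))

    return output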
Let's define the main function. Load the input data from the Brown corpus:

if __name__=='__main__':
    # Read the data from the Brown corpus
    data = ' '.join(brown.words()[:10000])
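Note that the Brown corpus must be available locally; if it isn't, NLTK raises a LookupError, and you can fetch it once with the standard downloader:

import nltk
nltk.download('brown')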
Divide the text data into five chunks:
    # Number of words in each chunk
    num_words...
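The snippet above is cut off. As a rough sketch only, with illustrative variable names that may not match the book's code, the step could continue along these lines; since the data loaded above contains 10,000 words, a chunk size of 2,000 words yields the five chunks mentioned:

    # Illustrative continuation (assumed, not verbatim from the recipe)
    num_words = 2000   # 10,000 words / 2,000 per chunk = 5 chunks

    chunks = []
    counter = 0

    # splitter() is the chunking utility imported above
    for text in splitter(data, num_words):
        chunks.append({'index': counter, 'text': text})
        counter += 1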