Visually sifting through millions of words is impractical in data analysis, in part because language is full of linking verbs that repeat throughout the body of a text. Common words such as am, is, are, was, were, being, and been would sit at the top of the most_common() list when you apply NLP to the source data, even after it has been normalized. As NLP libraries matured, they added a dictionary of stopwords: a more comprehensive list of words that provide little value in text analytics. Example stopwords include linking verbs along with words such as the, an, a, and until. The goal is to filter these stopwords out of your token values, leaving a subset of the data that you can focus your analysis on.
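As a minimal sketch of that filtering step using NLTK's stopword dictionary, the tokens list below is a hypothetical stand-in for your own normalized token values:

```python
from collections import Counter

import nltk
from nltk.corpus import stopwords

nltk.download("stopwords", quiet=True)  # fetch the stopword dictionary once

# Hypothetical normalized tokens; substitute your own tokenized text.
tokens = ["the", "data", "is", "ready", "until", "the", "analysis", "is", "done"]

stop_words = set(stopwords.words("english"))  # includes am, is, are, the, an, a, until, ...
filtered = [t for t in tokens if t not in stop_words]

# With stopwords removed, most_common() surfaces meaningful terms instead of linking verbs.
print(Counter(filtered).most_common(5))
```

Using a set for the stopword lookup keeps each membership test constant-time, which matters when you are filtering millions of tokens.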
NLP can demand significant CPU and RAM, especially when you are working with a large collection of words, so you may need to break your data into logical chunks, such as alphabetical groups, to complete your analysis...
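One way to do that chunking is sketched below; grouping by first letter is an assumption for illustration, and any key that partitions your data into manageable pieces would work:

```python
from collections import defaultdict

# Hypothetical filtered tokens; substitute your own post-stopword data.
tokens = ["analysis", "data", "dictionary", "token", "text", "value"]

chunks = defaultdict(list)
for token in tokens:
    chunks[token[0]].append(token)  # group tokens by their first letter

# Each chunk can now be analyzed independently, keeping memory use bounded.
for letter in sorted(chunks):
    print(letter, chunks[letter])
```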