In Chapter 8, Topic Models, we explored what topic models are and how to set them up with both Gensim and scikit-learn. But just setting up a topic model isn't sufficient - a poorly trained topic model won't offer us any useful information.
We've already talked about the most important pretraining tip - preprocessing. By now it should be clear that garbage in means garbage out, but sometimes, even when we are careful about what we feed in, we still get nonsense out. In this section, we will briefly discuss what else you can do to polish your results.
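As a quick reminder of the kind of preprocessing we mean, here is a minimal sketch, assuming spaCy's en_core_web_sm model, Gensim's Phrases/Phraser classes, and a toy two-document corpus invented for illustration. It lemmatizes, lowercases, drops stop words and punctuation, and merges frequent bigrams into single tokens before the texts ever reach a topic model:

```python
import spacy
from gensim.models.phrases import Phrases, Phraser

# Toy corpus purely for illustration - substitute your own documents
documents = [
    "The quick brown fox jumps over the lazy dog.",
    "Topic models work best on cleaned, tokenized text.",
]

nlp = spacy.load("en_core_web_sm")

def preprocess(text):
    # Lemmatize and lowercase; drop stop words, punctuation, and whitespace
    doc = nlp(text)
    return [
        token.lemma_.lower()
        for token in doc
        if not token.is_stop and not token.is_punct and not token.is_space
    ]

texts = [preprocess(doc) for doc in documents]

# Detect frequent bigrams and merge them into single tokens (e.g. new_york);
# the low min_count/threshold values are only sensible for this tiny corpus
bigram = Phraser(Phrases(texts, min_count=1, threshold=1))
texts = [bigram[t] for t in texts]

print(texts)
```

The exact filters and thresholds will depend on your corpus; the point is that the tokens handed to the topic model should already be clean, normalized, and as informative as possible.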
It would be wise to revisit Chapter 3, spaCy's Language Model, and Chapter 4, Gensim - Vectorizing Text and Transformations and n-grams, now - they introduce the methods used in preprocessing, which is usually the first advanced training tip given. It is worth...