What about all the other parameters? We could, for instance, tweak the number of clusters or play with the vectorizer's max_features parameter (you should try that!). We can also experiment with different cluster center initializations. Beyond that, there are more exciting alternatives to K-means itself, for example, clustering approaches that let you use different similarity measures, such as cosine similarity, the Pearson correlation coefficient, or the Jaccard coefficient. An exciting field for you to explore. A rough sketch of such an experiment follows.
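Here is a minimal sketch of what such a parameter sweep could look like. The dataset, the parameter ranges, and the plain TfidfVectorizer used here are illustrative assumptions rather than the exact setup from the preceding examples:

    # Sketch of a sweep over max_features and the number of clusters.
    from sklearn.datasets import fetch_20newsgroups
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.cluster import KMeans

    posts = fetch_20newsgroups(subset='train').data

    for max_features in (1000, 5000, 10000):
        vectorizer = TfidfVectorizer(max_features=max_features,
                                     stop_words='english')
        X = vectorizer.fit_transform(posts)
        for n_clusters in (10, 20, 50):
            # init='k-means++' (the default) or init='random' changes how
            # the cluster centers are initialized
            km = KMeans(n_clusters=n_clusters, init='k-means++',
                        n_init=10, random_state=3)
            km.fit(X)
            print(max_features, n_clusters, km.inertia_)

The inertia printed at the end (the within-cluster sum of squared distances) is only one crude way to compare runs, which brings us to the next question.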
But before you go there, you will have to define what you actually mean by "better". Scikit-learn has a complete package dedicated to exactly this question. It is called sklearn.metrics and contains a full range of metrics to measure clustering quality. Maybe that should be the first place to go now, right into the sources of the metrics package.
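To give you a taste of what the package offers, here is a sketch that scores a clustering such as the one from the previous snippet. It assumes km and X from that run; the particular metrics chosen are just examples of what sklearn.metrics provides:

    # Sketch: judging clustering quality with sklearn.metrics.
    from sklearn import metrics
    from sklearn.datasets import fetch_20newsgroups

    labels_true = fetch_20newsgroups(subset='train').target  # known newsgroups
    labels_pred = km.labels_                                  # cluster assignments

    # These compare the clustering against known labels (need ground truth)
    print(metrics.adjusted_rand_score(labels_true, labels_pred))
    print(metrics.homogeneity_score(labels_true, labels_pred))

    # This one needs no ground truth: it measures how compact and
    # well separated the clusters are (sampled here to keep it fast)
    print(metrics.silhouette_score(X, labels_pred, sample_size=1000))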