Visualizing further
This section describes how to reduce the dimensionality of all the trained word vectors and collect them into one giant matrix for visualization. Since each word is a 300-dimensional vector, it must be projected down to a lower dimension before it can be plotted in a 2D space.
Getting ready
Once the model is saved and checkpointed after training, begin by loading it into memory, as you did in the previous section. The libraries and modules that will be utilized in this section are:
scikit-learn (for its TSNE implementation)
pandas
seaborn
numpy
How to do it...
The steps are as follows:
- Squash the dimensionality of the 300-dimensional word vectors by using the following command:
import sklearn.manifold
tsne = sklearn.manifold.TSNE(n_components=2, random_state=0)
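To see what this command produces on its own, here is a minimal, self-contained sketch: 25 random 300-dimensional vectors stand in for the trained word embeddings (the real vectors come from the trained model, which is not loaded here), and t-SNE reduces them to two dimensions. The perplexity is lowered from its default because t-SNE requires perplexity to be smaller than the number of samples:

```python
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.RandomState(0)
vectors = rng.rand(25, 300)  # stand-in for 25 trained 300-dimensional word vectors

# perplexity must be < number of samples, hence 5 for this tiny example
tsne = TSNE(n_components=2, perplexity=5, random_state=0)
points_2d = tsne.fit_transform(vectors)

print(points_2d.shape)  # one (x, y) coordinate pair per input vector
```

With the full vocabulary of a trained model, the default perplexity can be left in place, as in the recipe's command above.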
- Put all the word vectors into one giant matrix (named all_word_vectors_matrix), and view it using the following commands:
all_word_vectors_matrix = got2vec.wv.syn0
print(all_word_vectors_matrix)
- Use the tsne technique to fit all the learned representations into a two-dimensional space...
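The steps above can be sketched end to end. Since the trained got2vec model is not available here, a toy vocabulary and random vectors stand in for got2vec.wv; the column names word, x, and y are illustrative choices, not part of the original recipe:

```python
import numpy as np
import pandas as pd
from sklearn.manifold import TSNE

# Stand-ins for the trained model's vocabulary and 300-dimensional vectors
# (in the recipe, these come from got2vec.wv).
vocab = ["word_%d" % i for i in range(30)]
all_word_vectors_matrix = np.random.RandomState(0).rand(len(vocab), 300)

# Squash 300 dimensions down to 2 (perplexity lowered for this tiny sample).
tsne = TSNE(n_components=2, perplexity=5, random_state=0)
all_word_vectors_matrix_2d = tsne.fit_transform(all_word_vectors_matrix)

# Pair each word with its 2D coordinates so it can be plotted and labeled.
points = pd.DataFrame(
    [(word, coords[0], coords[1])
     for word, coords in zip(vocab, all_word_vectors_matrix_2d)],
    columns=["word", "x", "y"],
)
print(points.head())

# Plotting (requires seaborn; shown commented for completeness):
# import seaborn as sns
# sns.scatterplot(data=points, x="x", y="y")
```

The DataFrame step is what makes the seaborn plot straightforward: each row is one word with its (x, y) position in the squashed space.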