For natural language processing (NLP) applications, words must be represented in numerical form, since machines can only process numeric data. Representing words as arrays of numbers is known as "word embedding": each array (word representation) is a point in an n-dimensional space, where "n" is the number of dimensions/features (the length of the array) used to represent each word.
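As a minimal sketch of this idea, the snippet below uses small hand-made vectors (the words and values are purely illustrative, not learned embeddings) and cosine similarity to show how geometric closeness in the embedding space can reflect semantic relatedness:

```python
import math

# Hypothetical 3-dimensional "embeddings" (real embeddings are learned and much longer).
embeddings = {
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.85, 0.75, 0.15],
    "apple": [0.1, 0.2, 0.9],
}

def cosine(a, b):
    """Cosine similarity between two vectors: dot product over the product of norms."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Related words end up closer together in the vector space than unrelated ones.
print(cosine(embeddings["king"], embeddings["queen"]))  # close to 1.0
print(cosine(embeddings["king"], embeddings["apple"]))  # much smaller
```

In a trained model, these vectors would come from the learning algorithm rather than being written by hand, but the same distance-based reasoning applies.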
Depending on the type of NLP application, a machine learning (ML) algorithm is trained to learn the word representations. These representations typically range from 50 to 300 dimensions, far too many to visualize or comprehend directly. Using dimensionality reduction techniques such as PCA or t-SNE, this high-dimensional space can be reduced to two or three dimensions and plotted on a graph, making the relationships between words visible.
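The reduction step can be sketched with PCA implemented directly via SVD in NumPy (scikit-learn's `PCA` would work equally well); the embedding matrix here is random filler standing in for learned vectors:

```python
import numpy as np

# Toy embedding matrix: 5 "words", each with 50 dimensions (random values stand in
# for learned embeddings, which would come from a trained model).
rng = np.random.default_rng(0)
embeddings = rng.normal(size=(5, 50))

# PCA via SVD: center the data, then project onto the top 2 principal components.
centered = embeddings - embeddings.mean(axis=0)
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
coords_2d = centered @ Vt[:2].T  # shape (5, 2): one (x, y) point per word

print(coords_2d.shape)  # (5, 2) -- ready to scatter-plot, one point per word
```

Each row of `coords_2d` can then be plotted as a labeled point, so that words with similar embeddings appear near each other on the 2D graph.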