Word embeddings refer to a class of feature learning techniques in Natural Language Processing (NLP) used to generate a real-valued vector representation of a word, sentence, or document.
Many machine learning tasks today involve text. For example, Google's language translation and spam detection in Gmail both take text as input to their models. However, machine learning models operate on real-valued numbers, not raw strings, so text must first be encoded into numbers or vectors.
For example, consider the sentence "I like Football", for which we want a representation of all of the words. A brute force method to generate embeddings for the three words "I", "like", and "Football" is the one hot...
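As a minimal sketch of the idea, one-hot encoding maps each word in a fixed vocabulary to a vector that is all zeros except for a single 1 at that word's index. The three-word vocabulary below is taken from the example sentence; the helper name `one_hot` is illustrative, not from any particular library.

```python
# Minimal one-hot encoding sketch for the sentence "I like Football".
vocab = ["I", "like", "Football"]

def one_hot(word, vocab):
    """Return a vector with 1 at the word's vocabulary index, 0 elsewhere."""
    vec = [0] * len(vocab)
    vec[vocab.index(word)] = 1
    return vec

for word in vocab:
    print(word, one_hot(word, vocab))
# "I"        -> [1, 0, 0]
# "like"     -> [0, 1, 0]
# "Football" -> [0, 0, 1]
```

Note that each vector's length equals the vocabulary size, which is why this brute force approach scales poorly to large vocabularies.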