The BoW models discussed in the earlier section suffer from a key limitation: they capture no information about a word's meaning or context. As a result, relationships between words, such as contextual closeness, are lost across a collection. For example, the approach cannot capture even a simple relationship such as the fact that the words "cars" and "buses" both refer to vehicles that are often discussed in the context of transportation. Word embeddings overcome this limitation by mapping semantically similar words to similar representations; a small sketch of the problem follows.
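The limitation is easy to see with a minimal sketch (the toy vocabulary below is an assumption for illustration): in a BoW or one-hot representation, every pair of distinct words is orthogonal, so "cars" and "buses" look no more related than "cars" and "apple".

```python
import numpy as np

# Hypothetical toy vocabulary; in a BoW model each word is just an index.
vocab = ["cars", "buses", "train", "apple"]
word_to_idx = {w: i for i, w in enumerate(vocab)}

def one_hot(word):
    """Return the BoW-style one-hot vector for a word."""
    vec = np.zeros(len(vocab))
    vec[word_to_idx[word]] = 1.0
    return vec

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# "cars" and "buses" are orthogonal, so their similarity is 0.0,
# even though both are vehicles discussed in the context of transportation.
print(cosine(one_hot("cars"), one_hot("buses")))  # 0.0
```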
Word vectors represent words as multidimensional, continuous floating-point numbers, where semantically similar words are mapped to nearby points in geometric space. For example, the words fruit and leaves would be placed close to one another because they tend to appear in similar plant-related contexts.
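As a rough illustration (the 3-dimensional vectors below are hand-picked for the example, not learned; real embeddings are typically 50-300 dimensional and trained from data), semantically related words end up with a high cosine similarity, while unrelated words do not:

```python
import numpy as np

# Illustrative, hand-picked word vectors (not trained embeddings).
vectors = {
    "cars":  np.array([0.90, 0.80, 0.10]),
    "buses": np.array([0.85, 0.75, 0.20]),
    "fruit": np.array([0.10, 0.20, 0.90]),
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Related words point in nearly the same direction; unrelated words do not.
print(cosine(vectors["cars"], vectors["buses"]))  # close to 1.0
print(cosine(vectors["cars"], vectors["fruit"]))  # much lower
```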