Now it is time to transform the string representation into a numeric one. We adopt a bag-of-words approach, but with a trick called feature hashing, which gives us a time- and space-efficient implementation of bag-of-words, as explained earlier. Let's look in more detail at how Spark employs this powerful technique to help us construct and access our tokenized dataset efficiently.
At its core, feature hashing is a fast and space-efficient method for dealing with high-dimensional data (typical when working with text): it converts arbitrary features into indices within a vector or matrix. This is best described with an example. Suppose we have the following two movie reviews:
- The movie Goodfellas was well worth the money spent. Brilliant acting!
- Goodfellas is a riveting movie with a great cast and a brilliant plot; a must-see for all...
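The idea behind the trick can be sketched in a few lines of plain Python. This is a toy illustration, not Spark's implementation; the function names, the tokenization rule, and the vector size of 16 are our own choices for the example:

```python
# Minimal sketch of feature hashing (the "hashing trick"): each token is
# mapped to a fixed-size vector index by a hash function, so no vocabulary
# dictionary ever needs to be built or stored.
import re
import zlib

NUM_FEATURES = 16  # fixed vector size; hash collisions are an accepted trade-off

def tokenize(text):
    # Lowercase and split on non-letter characters, dropping empty tokens.
    return [t for t in re.split(r"[^a-z]+", text.lower()) if t]

def hash_features(text, num_features=NUM_FEATURES):
    # Bag-of-words vector: index = hash(token) mod num_features.
    vec = [0] * num_features
    for token in tokenize(text):
        idx = zlib.crc32(token.encode("utf-8")) % num_features
        vec[idx] += 1
    return vec

reviews = [
    "The movie Goodfellas was well worth the money spent. Brilliant acting!",
    "Goodfellas is a riveting movie with a great cast and a brilliant plot; "
    "a must see for all...",
]

for review in reviews:
    print(hash_features(review))
```

Because the vector size is fixed up front, every review maps to a vector of the same length regardless of its vocabulary, and the same word (for example, *Goodfellas*) always lands in the same slot in both vectors.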