Implementing Stanford NLP lemmatization over Spark
Lemmatization is a pre-processing step that reduces all the grammatical/inflected forms of a word to its root in a methodical way. It uses context and part of speech to determine the inflected form of a word, and applies different normalization rules for each part of speech to arrive at the root form (lemma); for example, "am", "are", and "is" all lemmatize to "be", and "mice" lemmatizes to "mouse". In this recipe, we'll see how to lemmatize text using the Stanford CoreNLP API.
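Before distributing this over Spark, it may help to see what the Stanford pipeline does on its own. The following is a minimal sketch, not taken from the recipe (the object name LemmaDemo and the sample sentences are illustrative), assuming the stanford-corenlp jar and its models jar are on the classpath. It shows the part-of-speech dependence described above:

```scala
import java.util.Properties
import edu.stanford.nlp.pipeline.{Annotation, StanfordCoreNLP}
import edu.stanford.nlp.ling.CoreAnnotations.{LemmaAnnotation, SentencesAnnotation, TokensAnnotation}
import scala.collection.JavaConverters._

object LemmaDemo extends App {
  // Build a pipeline with just the annotators lemmatization needs
  val props = new Properties()
  props.setProperty("annotators", "tokenize, ssplit, pos, lemma")
  val pipeline = new StanfordCoreNLP(props)

  // The same surface form gets different lemmas depending on its POS tag:
  // "saw" tagged as a past-tense verb typically lemmatizes to "see",
  // while "saw" tagged as a noun stays "saw".
  for (text <- Seq("I saw the film.", "He cut the plank with a saw.")) {
    val doc = new Annotation(text)
    pipeline.annotate(doc)
    val lemmas = for {
      sentence <- doc.get(classOf[SentencesAnnotation]).asScala
      token    <- sentence.get(classOf[TokensAnnotation]).asScala
    } yield token.get(classOf[LemmaAnnotation])
    println(lemmas.mkString(" "))
  }
}
```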
Getting ready
To step through this recipe, you will need a running Spark cluster, either in pseudo-distributed mode or in one of the distributed modes, that is, standalone, YARN, or Mesos. For installing Spark on a standalone cluster, please refer to http://spark.apache.org/docs/latest/spark-standalone.html. Also install Scala and Java, and optionally Hadoop.
How to do it…
Let's see how to apply lemmatization using Stanford NLP over Spark:
Let's start an application named SparkCoreNLP. Initially, specify the following libraries in the build.sbt file:

libraryDependencies...
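The dependency list above is truncated in the original; the following is a hedged reconstruction of the kind of entries it likely contains. The artifact coordinates are the standard ones for Spark and Stanford CoreNLP, but the version numbers are illustrative assumptions and should match your cluster and Scala version:

```scala
libraryDependencies ++= Seq(
  // Versions are assumptions, not from the original recipe
  "org.apache.spark" %% "spark-core" % "2.1.0" % "provided",
  "edu.stanford.nlp" % "stanford-corenlp" % "3.6.0",
  // The "models" classifier jar carries the POS tagger and lemma model files
  "edu.stanford.nlp" % "stanford-corenlp" % "3.6.0" classifier "models"
)
```

With the dependencies in place, the application body might look like the sketch below. Only the object name SparkCoreNLP comes from the recipe; the rest is an assumption about how such a driver is typically written. The important design choice is to build the heavyweight, non-serializable StanfordCoreNLP pipeline inside mapPartitions, so each partition constructs its own instance instead of Spark attempting to ship one from the driver:

```scala
import java.util.Properties
import org.apache.spark.{SparkConf, SparkContext}
import edu.stanford.nlp.pipeline.{Annotation, StanfordCoreNLP}
import edu.stanford.nlp.ling.CoreAnnotations.{LemmaAnnotation, SentencesAnnotation, TokensAnnotation}
import scala.collection.JavaConverters._

object SparkCoreNLP {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("SparkCoreNLP").setMaster("local[*]")
    val sc = new SparkContext(conf)

    // Hypothetical input; in practice this would come from sc.textFile(...)
    val lines = sc.parallelize(Seq(
      "The children were running faster than the mice.",
      "She has eaten all of the apples."))

    val lemmatized = lines.mapPartitions { iter =>
      // One pipeline per partition: StanfordCoreNLP is expensive to create
      // and not serializable, so it must not be built on the driver.
      val props = new Properties()
      props.setProperty("annotators", "tokenize, ssplit, pos, lemma")
      val pipeline = new StanfordCoreNLP(props)
      iter.map { text =>
        val doc = new Annotation(text)
        pipeline.annotate(doc)
        doc.get(classOf[SentencesAnnotation]).asScala
          .flatMap(_.get(classOf[TokensAnnotation]).asScala)
          .map(_.get(classOf[LemmaAnnotation]))
          .mkString(" ")
      }
    }

    // Prints one lemmatized line per input sentence
    lemmatized.collect().foreach(println)
    sc.stop()
  }
}
```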