Implementing the OpenNLP chunker over Spark
Chunking is shallow parsing: instead of recovering the deep structure of a sentence, we group its tokens into chunks that carry meaning on their own. A chunk is the minimal unit that can be processed as a whole. The conventional chunking pipeline is to tokenize and POS-tag the input string before handing it to the chunker.
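To make the pipeline concrete, here is a minimal local sketch (for example, in the Spark shell) using OpenNLP's TokenizerME, POSTaggerME, and ChunkerME. The model files en-token.bin, en-pos-maxent.bin, and en-chunker.bin are the standard pre-trained English models from the OpenNLP models download page; their local paths here are assumptions:

import java.io.FileInputStream
import opennlp.tools.tokenize.{TokenizerME, TokenizerModel}
import opennlp.tools.postag.{POSTaggerME, POSModel}
import opennlp.tools.chunker.{ChunkerME, ChunkerModel}

// Load the pre-trained English models (local paths are assumptions).
val tokenizer = new TokenizerME(new TokenizerModel(new FileInputStream("en-token.bin")))
val tagger = new POSTaggerME(new POSModel(new FileInputStream("en-pos-maxent.bin")))
val chunker = new ChunkerME(new ChunkerModel(new FileInputStream("en-chunker.bin")))

// Tokenize and POS-tag the sentence, then chunk it.
val tokens = tokenizer.tokenize("The quick brown fox jumps over the lazy dog.")
val tags = tagger.tag(tokens)
val chunks = chunker.chunk(tokens, tags) // IOB labels such as B-NP, I-NP, B-VP

Each output label marks whether a token begins (B-) or continues (I-) a chunk such as a noun phrase (NP) or verb phrase (VP).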
Getting ready
To step through this recipe, you will need a running Spark cluster, either in pseudo-distributed mode or in one of the distributed modes, that is, standalone, YARN, or Mesos. For installing Spark on a standalone cluster, please refer to http://spark.apache.org/docs/latest/spark-standalone.html. Also install Java and Scala, and optionally Hadoop.
How to do it…
Let's see how to run the OpenNLP chunker over Spark:
Let's start an application named SparkNLP. First, specify the following libraries in the build.sbt file:

libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-core" % "1.6.0",
  ...
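Beyond spark-core, the application needs the OpenNLP library itself. A plausible completion of the dependency list, assuming OpenNLP version 1.6.0 (org.apache.opennlp % opennlp-tools is the published Maven artifact), is:

libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-core" % "1.6.0",
  "org.apache.opennlp" % "opennlp-tools" % "1.6.0"
)

With the dependencies in place, a minimal driver can distribute the input sentences as an RDD and run the tokenize, POS-tag, and chunk stages inside mapPartitions, so that each partition loads the models once rather than once per record. The following is a sketch, not the book's listing; the input path sentences.txt and the model file locations are placeholders, and the .bin model files must be available on every worker (for example, shipped with --files):

import java.io.FileInputStream
import org.apache.spark.{SparkConf, SparkContext}
import opennlp.tools.tokenize.{TokenizerME, TokenizerModel}
import opennlp.tools.postag.{POSTaggerME, POSModel}
import opennlp.tools.chunker.{ChunkerME, ChunkerModel}

object SparkNLP {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("SparkNLP"))
    val sentences = sc.textFile("sentences.txt") // one sentence per line

    val chunked = sentences.mapPartitions { iter =>
      // Instantiate the models once per partition; they are not serializable,
      // so they must be created on the workers rather than on the driver.
      val tokenizer = new TokenizerME(new TokenizerModel(new FileInputStream("en-token.bin")))
      val tagger = new POSTaggerME(new POSModel(new FileInputStream("en-pos-maxent.bin")))
      val chunker = new ChunkerME(new ChunkerModel(new FileInputStream("en-chunker.bin")))
      iter.map { sentence =>
        val tokens = tokenizer.tokenize(sentence)
        val tags = tagger.tag(tokens)
        // Pair each token with its IOB chunk label, e.g. (fox,I-NP).
        tokens.zip(chunker.chunk(tokens, tags)).mkString(" ")
      }
    }

    chunked.take(5).foreach(println)
    sc.stop()
  }
}

Running this with spark-submit, passing the three model binaries via --files, prints the first five chunked sentences.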