Summary
In this chapter, we addressed the process of preparing data and discussed pipelines. We illustrated several techniques for extracting text from HTML, Word, and PDF documents.
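As a quick illustration, and not necessarily the exact code used in the chapter, the following sketch extracts text from each of these formats. It assumes jsoup for HTML, Apache POI for Word, and Apache PDFBox 2.x for PDF are on the classpath, and the file names are placeholders:

import java.io.File;
import java.io.FileInputStream;

import org.apache.pdfbox.pdmodel.PDDocument;
import org.apache.pdfbox.text.PDFTextStripper;
import org.apache.poi.xwpf.extractor.XWPFWordExtractor;
import org.apache.poi.xwpf.usermodel.XWPFDocument;
import org.jsoup.Jsoup;

public class TextExtractionSketch {
    public static void main(String[] args) throws Exception {
        // HTML: parse the markup with jsoup and keep only the visible text
        String htmlText = Jsoup.parse(new File("page.html"), "UTF-8").text();
        System.out.println("HTML characters: " + htmlText.length());

        // Word (.docx): use POI's XWPFWordExtractor to pull the document body
        try (FileInputStream in = new FileInputStream("report.docx");
             XWPFDocument docx = new XWPFDocument(in)) {
            String wordText = new XWPFWordExtractor(docx).getText();
            System.out.println("Word characters: " + wordText.length());
        }

        // PDF: strip the text layer with PDFBox (2.x API)
        try (PDDocument pdf = PDDocument.load(new File("paper.pdf"))) {
            String pdfText = new PDFTextStripper().getText(pdf);
            System.out.println("PDF characters: " + pdfText.length());
        }
    }
}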
We showed that a pipeline is simply a sequence of tasks integrated to solve a problem, and that its elements can be inserted or removed as needed. We discussed the Stanford pipeline architecture in detail, examining the various annotators that can be used and how the pipeline can take advantage of multiple processors.
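To recall the shape of that architecture, here is a minimal sketch of a StanfordCoreNLP pipeline; the annotator list and sample text are chosen only for illustration:

import java.util.Properties;

import edu.stanford.nlp.ling.CoreAnnotations;
import edu.stanford.nlp.ling.CoreLabel;
import edu.stanford.nlp.pipeline.Annotation;
import edu.stanford.nlp.pipeline.StanfordCoreNLP;
import edu.stanford.nlp.util.CoreMap;

public class StanfordPipelineSketch {
    public static void main(String[] args) {
        // The annotators property defines the sequence of tasks in the pipeline;
        // annotators can be added or removed from this list as needed.
        Properties props = new Properties();
        props.setProperty("annotators", "tokenize, ssplit, pos, lemma");
        // props.setProperty("threads", "4"); // one way to spread work across processors
        StanfordCoreNLP pipeline = new StanfordCoreNLP(props);

        Annotation annotation = new Annotation("The pipeline processes this sentence.");
        pipeline.annotate(annotation);

        // Each annotator adds its results to the annotation object.
        for (CoreMap sentence : annotation.get(CoreAnnotations.SentencesAnnotation.class)) {
            for (CoreLabel token : sentence.get(CoreAnnotations.TokensAnnotation.class)) {
                System.out.println(token.word() + "/"
                        + token.get(CoreAnnotations.PartOfSpeechAnnotation.class));
            }
        }
    }
}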
We also demonstrated how to construct a pipeline that creates and uses an index for text searches using OpenNLP. This approach offers more flexibility in how a pipeline can be constructed than the Stanford pipeline does.
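The following sketch is not the chapter's implementation, but it suggests the idea: OpenNLP's SimpleTokenizer supplies the tokenization step, and a plain in-memory inverted index, a hypothetical stand-in for the chapter's index, maps each token to the documents that contain it:

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

import opennlp.tools.tokenize.SimpleTokenizer;

public class IndexPipelineSketch {
    // Hypothetical inverted index: token -> list of document ids
    private final Map<String, List<Integer>> index = new HashMap<>();
    private final SimpleTokenizer tokenizer = SimpleTokenizer.INSTANCE;

    // Pipeline stage 1: tokenize the document; stage 2: add its tokens to the index
    public void addDocument(int docId, String text) {
        for (String token : tokenizer.tokenize(text)) {
            index.computeIfAbsent(token.toLowerCase(), k -> new ArrayList<>()).add(docId);
        }
    }

    // Search stage: look up the query term in the index
    public List<Integer> search(String term) {
        return index.getOrDefault(term.toLowerCase(), new ArrayList<>());
    }

    public static void main(String[] args) {
        IndexPipelineSketch pipeline = new IndexPipelineSketch();
        pipeline.addDocument(1, "Pipelines process text in stages.");
        pipeline.addDocument(2, "An index speeds up text searches.");
        System.out.println(pipeline.search("text")); // prints [1, 2]
    }
}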
We hope this has been a fruitful introduction to NLP using Java. We covered all of the significant NLP tasks and demonstrated several different approaches to supporting them.