Natural Language Processing with Java

Introduction to NLP

Natural Language Processing (NLP) is a broad topic focused on the use of computers to analyze natural languages. It addresses areas such as speech processing, relationship extraction, document categorization, and text summarization. However, these types of analyses are based on a set of fundamental techniques, such as tokenization, sentence detection, classification, and relationship extraction. These basic techniques are the focus of this book. We will start with a detailed discussion of NLP, investigate why it is important, and identify application areas.

There are many tools available that support NLP tasks. We will focus on the Java language and how various Java Application Programming Interfaces (APIs) support NLP. In this chapter, we will briefly identify the major APIs, including Apache's OpenNLP, the Stanford NLP libraries, LingPipe, and GATE.

This is followed by a discussion of the basic NLP techniques illustrated in this book. The nature and use of these techniques is presented and illustrated using one of the NLP APIs. Many of these techniques will use models. Models are similar to a set of rules that are used to perform a task such as tokenizing text. They are typically represented by a class that is instantiated from a file. We'll round off the chapter with a brief discussion on how data can be prepared to support NLP tasks.

NLP is not easy. While some problems can be solved relatively easily, there are many others that require the use of sophisticated techniques. We will strive to provide a foundation for NLP processing so that you will be better able to understand which techniques are available for, and applicable to, a given problem.

NLP is a large and complex field. In this book, we will only be able to address a small part of it. We will focus on core NLP tasks that can be implemented using Java. Throughout this book, we will demonstrate a number of NLP techniques using both the Java SE SDK and other libraries, such as OpenNLP and Stanford NLP. To use these libraries, specific API JAR files need to be associated with the project in which they are being used. A discussion of these libraries is found in the Survey of NLP tools section, which contains download links for the libraries. The examples in this book were developed using NetBeans 8.0.2. These projects require the API JAR files to be added to the Libraries category of the Project Properties dialog box.

In this chapter, we will learn about the following topics:

  • What is NLP?
  • Why use NLP?
  • Why is NLP so hard?
  • Survey of NLP tools
  • Deep learning for Java
  • Overview of text-processing tasks
  • Understanding NLP models
  • Preparing data

What is NLP?

A formal definition of NLP frequently includes wording to the effect that it is a field of study using computer science, Artificial Intelligence (AI), and formal linguistics concepts to analyze natural language. A less formal definition suggests that it is a set of tools used to derive meaningful and useful information from natural language sources, such as web pages and text documents.

Meaningful and useful implies that it has some commercial value, though it is frequently used for academic problems. This can readily be seen in its support of search engines. A user query is processed using NLP techniques to generate a results page that answers the query. Modern search engines have been very successful in this regard. NLP techniques have also found use in automated help systems and in support of complex query systems, as typified by IBM's Watson project.

When we work with a language, the terms syntax and semantics are frequently encountered. The syntax of a language refers to the rules that control a valid sentence structure. For example, a common sentence structure in English starts with a subject followed by a verb and then an object, such as "Tim hit the ball." We are not used to unusual sentence orders, such as "Hit ball Tim." Although the rules of syntax for English are not as rigorous as those for computer languages, we still expect a sentence to follow basic syntax rules.

The semantics of a sentence is its meaning. As English speakers, we understand the meaning of the sentence, "Tim hit the ball." However, English, and other natural languages, can be ambiguous at times and a sentence's meaning may only be determined from its context. As we will see, various machine learning techniques can be used to attempt to derive the meaning of a text.

As we progress with our discussions, we will introduce many linguistic terms that will help us better understand natural languages and provide us with a common vocabulary to explain the various NLP techniques. We will see how the text can be split into individual elements and how these elements can be classified.

In general, these approaches are used to enhance applications, thus making them more valuable to their users. NLP applications range from the relatively simple to those that push the boundaries of what is possible today. In this book, we will show examples that illustrate simple approaches, which may be all that is required for some problems, as well as the more advanced libraries and classes available to address sophisticated needs.

Why use NLP?

NLP is used in a wide variety of disciplines to solve many different types of problems. Text analysis is performed on text that ranges from a few words of user input for an internet query to multiple documents that need to be summarized. We have seen a large growth in the amount and availability of unstructured data in recent years. This has taken forms such as blogs, tweets, and various other social media. NLP is ideal for analyzing this type of information.

Machine learning and text analysis are used frequently to enhance an application's utility. A brief list of application areas follows:

  • Searching: This identifies specific elements of text. It can be as simple as finding the occurrence of a name in a document or might involve the use of synonyms and alternate spellings/misspellings to find entries that are close to the original search string.
  • Machine translation: This typically involves the translation of one natural language into another.
  • Summarization: Paragraphs, articles, documents, or collections of documents may need to be summarized. NLP has been used successfully for this purpose.
  • Named-Entity Recognition (NER): This involves extracting names of locations, people, and things from text. Typically, this is used in conjunction with other NLP tasks, such as processing queries.
  • Information grouping: This is an important activity that takes textual data and creates a set of categories that reflect the content of the document. You have probably encountered numerous websites that organize data based on your needs and have categories listed on the left-hand side of the website.
  • Parts-of-Speech tagging (POS): In this task, text is split up into different grammatical elements, such as nouns and verbs. This is useful for analyzing the text further.
  • Sentiment analysis: People's feelings and attitudes regarding movies, books, and other products can be determined using this technique. This is useful in providing automated feedback with regard to how well a product is perceived.
  • Answering queries: This type of processing was illustrated when IBM's Watson won a Jeopardy! competition. However, its use is not restricted to winning game shows; it has also been applied in a number of other fields, including medicine.
  • Speech-recognition: Human speech is difficult to analyze. Many of the advances that have been made in this field are the result of NLP efforts.
  • Natural-Language Generation (NLG): This is the process of generating text from a data or knowledge source, such as a database. It can automate the reporting of information, such as weather reports, or summarize medical reports.

NLP tasks frequently use different machine learning techniques. A common approach starts with training a model to perform a task, verifying that the model is correct, and then applying the model to a problem. We will examine this process further in the Understanding NLP models section.

Why is NLP so hard?

NLP is not easy. There are several factors that make this process hard. For example, there are hundreds of natural languages, each of which has different syntax rules. Words can be ambiguous where their meaning is dependent on their context. Here, we will examine a few of the more significant problem areas.

At the character level, there are several factors that need to be considered. For example, the encoding scheme used for a document needs to be considered. Text can be encoded using schemes such as ASCII, UTF-8, UTF-16, or Latin-1. Other factors, such as whether the text should be treated as case-sensitive or not, may need to be considered. Punctuation and numbers may require special processing. We sometimes need to consider the use of emoticons (character combinations and special character images), hyperlinks, repeated punctuation (... or ---), file extensions, and usernames with embedded periods. Many of these are handled by preprocessing text, as we will discuss in the Preparing data section.

When we tokenize text, it usually means we are breaking up the text into a sequence of words. These words are called tokens. The process is referred to as tokenization. When a language uses whitespace characters to delineate words, this process is not too difficult. With a language such as Chinese, it can be quite difficult since it uses unique symbols for words.

Words and morphemes may need to be assigned a Part-of-Speech (POS) label identifying the type of unit they represent. A morpheme is the smallest division of text that has meaning. Prefixes and suffixes are examples of morphemes. Often, we need to consider synonyms, abbreviations, acronyms, and spelling variations when we work with words.

Stemming is another task that may need to be applied. Stemming is the process of finding the word stem of a word. For example, words such as walking, walked, or walks have the word stem walk. Search engines often use stemming to broaden the matches for a query.
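OpenNLP ships with an implementation of the Porter stemmer. The following is a minimal sketch, assuming a recent opennlp-tools release in which the opennlp.tools.stemmer.PorterStemmer class is available:

import opennlp.tools.stemmer.PorterStemmer;

public class StemmerDemo {
    public static void main(String[] args) {
        PorterStemmer stemmer = new PorterStemmer();
        // Each inflected form reduces to the stem "walk"
        for (String word : new String[]{"walking", "walked", "walks"}) {
            System.out.println(word + " -> " + stemmer.stem(word));
        }
    }
}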

Closely related to stemming is the process of lemmatization. This process determines the base form of a word, called its lemma. For example, for the word operating, its stem is oper but its lemma is operate. Lemmatization is a more refined process than stemming, and uses vocabulary and morphological techniques to find a lemma. This can result in more precise analysis in some situations.
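Stanford CoreNLP supports lemmatization through its lemma annotator. The following is a minimal sketch, assuming the CoreNLP models JAR is on the classpath:

import edu.stanford.nlp.ling.CoreAnnotations;
import edu.stanford.nlp.ling.CoreLabel;
import edu.stanford.nlp.pipeline.Annotation;
import edu.stanford.nlp.pipeline.StanfordCoreNLP;
import java.util.Properties;

public class LemmaDemo {
    public static void main(String[] args) {
        Properties props = new Properties();
        // The lemma annotator requires tokenize, ssplit, and pos to run first
        props.setProperty("annotators", "tokenize, ssplit, pos, lemma");
        StanfordCoreNLP pipeline = new StanfordCoreNLP(props);
        Annotation document = new Annotation("He was operating the machines.");
        pipeline.annotate(document);
        for (CoreLabel token :
                document.get(CoreAnnotations.TokensAnnotation.class)) {
            System.out.println(token.word() + " -> "
                + token.get(CoreAnnotations.LemmaAnnotation.class));
        }
    }
}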

Words are combined into phrases and sentences. Sentence detection can be problematic and is not as simple as looking for a period at the end of a sentence. Periods are found in many places, including abbreviations such as Ms. and numbers such as 12.834.

We often need to understand which words in a sentence are nouns and which are verbs. We are also often concerned with the relationships between words. For example, coreference resolution determines the relationship between certain words in one or more sentences. Consider the following sentence:

"The city is large but beautiful. It fills the entire valley."

The word it is the coreference to city. When a word has multiple meanings, we might need to perform word-sense disambiguation (WSD) to determine the intended meaning. This can be difficult to do at times. For example, "John went back home." Does the home refer to a house, a city, or some other unit? Its meaning can sometimes be inferred from the context in which it is used. For example, "John went back home. It was situated at the end of a cul-de-sac."

Despite these difficulties, NLP is able to perform these tasks reasonably well in most situations and provide added value to many problem domains. For example, sentiment analysis can be performed on customer tweets, resulting in possible free product offers for dissatisfied customers. Medical documents can be readily summarized to highlight the relevant topics and improve productivity.

Summarization is the process of producing a short description of different units. These units can include multiple sentences, paragraphs, a document, or multiple documents. The intent may be to identify those sentences that convey the meaning of the unit, determine the prerequisites for understanding a unit, or to find items within these units. Frequently, the context of the text is important in accomplishing this task.

Survey of NLP tools

There are many tools available that support NLP. Some of these are available with the Java SE SDK but are limited in their utility for all but the simplest types of problems. Other libraries, such as Apache's OpenNLP and LingPipe, provide extensive and sophisticated support for NLP problems.

Low-level Java support includes string libraries, such as String, StringBuilder, and StringBuffer. These classes possess methods that perform searching, matching, and text-replacement. Regular expressions use special encoding to match substrings. Java provides a rich set of techniques to use regular expressions.
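For example, the java.util.regex package can match every inflected form of a word in a single pass. This sketch uses only the standard library:

import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class RegexDemo {
    public static void main(String[] args) {
        // Match "walk" plus an optional suffix, as a whole word
        Pattern pattern = Pattern.compile("\\bwalk(?:s|ed|ing)?\\b");
        Matcher matcher = pattern.matcher(
            "He walked while she walks; they were walking.");
        while (matcher.find()) {
            System.out.println(matcher.group() + " at index " + matcher.start());
        }
    }
}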

As discussed earlier, tokenizers are used to split text into individual elements. Java provides support for tokenization with the following (a combined sketch appears after the list):

  • The String class' split method
  • The StreamTokenizer class
  • The StringTokenizer class
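Here is a minimal sketch of all three approaches, using only the standard library:

import java.io.IOException;
import java.io.StreamTokenizer;
import java.io.StringReader;
import java.util.StringTokenizer;

public class JdkTokenizers {
    public static void main(String[] args) throws IOException {
        String text = "The quick brown fox jumps";

        // String.split uses a regular expression as the delimiter
        for (String token : text.split("\\s+")) {
            System.out.print("[" + token + "] ");
        }
        System.out.println();

        // StringTokenizer defaults to whitespace delimiters
        StringTokenizer st = new StringTokenizer(text);
        while (st.hasMoreTokens()) {
            System.out.print("[" + st.nextToken() + "] ");
        }
        System.out.println();

        // StreamTokenizer reads from a Reader and classifies tokens by type
        StreamTokenizer stream = new StreamTokenizer(new StringReader(text));
        while (stream.nextToken() != StreamTokenizer.TT_EOF) {
            if (stream.ttype == StreamTokenizer.TT_WORD) {
                System.out.print("[" + stream.sval + "] ");
            }
        }
        System.out.println();
    }
}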

There also exist a number of NLP libraries/APIs for Java. A partial list of Java-based NLP APIs follows. Most of these are open source. In addition, there are a number of commercial APIs available. We will focus on the open source APIs:

  • Apertium: http://www.apertium.org/
  • General Architecture for Text Engineering: http://gate.ac.uk/
  • Learning Based Java: https://github.com/CogComp/lbjava
  • LingPipe: http://alias-i.com/lingpipe/
  • MALLET: http://mallet.cs.umass.edu/
  • MontyLingua: http://web.media.mit.edu/~hugo/montylingua/
  • Apache OpenNLP: http://opennlp.apache.org/
  • UIMA: http://uima.apache.org/
  • Stanford Parser: http://nlp.stanford.edu/software
  • Apache Lucene Core: https://lucene.apache.org/core/
  • Snowball: http://snowballstem.org/

Many of these NLP tasks are combined to form a pipeline. A pipeline consists of various NLP tasks, which are integrated into a series of steps to achieve a processing goal. Examples of frameworks that support pipelines are General Architecture for Text Engineering (GATE) and Apache UIMA.

In the next section, we will cover several NLP APIs in more depth. A brief overview of their capabilities will be presented along with a list of useful links for each API.

Apache OpenNLP

The Apache OpenNLP project is a machine-learning-based toolkit for processing natural-language text; it addresses common NLP tasks and will be used throughout this book. It consists of several components that perform specific tasks, permit models to be trained, and support testing of those models. The general approach used by OpenNLP is to instantiate a model that supports the task from a file and then execute methods against the model to perform the task.

For example, in the following sequence, we will tokenize a simple string. For this code to execute properly, it must handle the FileNotFoundException and IOException exceptions. We use a try-with-resource block to open a FileInputStream instance using the en-token.bin file. This file contains a model that has been trained using English text:

try (InputStream is = new FileInputStream( 
        new File(getModelDir(), "en-token.bin"))){ 
    // Insert code to tokenize the text 
} catch (FileNotFoundException ex) { 
    ... 
} catch (IOException ex) { 
    ... 
} 

An instance of the TokenizerModel class is then created using this file inside the try block. Next, we create an instance of the Tokenizer class, as shown here:

TokenizerModel model = new TokenizerModel(is); 
Tokenizer tokenizer = new TokenizerME(model); 

The tokenize method is then applied, whose argument is the text to be tokenized. The method returns an array of String objects:

String tokens[] = tokenizer.tokenize("He lives at 1511 W. " 
    + "Randolph.");

A for-each statement displays the tokens, as shown here. The open and closed brackets are used to clearly identify the tokens:

for (String a : tokens) { 
  System.out.print("[" + a + "] "); 
} 
System.out.println(); 

When we execute this, we will get the following output:

[He] [lives] [at] [1511] [W.] [Randolph] [.]  

In this case, the tokenizer recognized that W. was an abbreviation and that the last period was a separate token demarking the end of the sentence.

We will use the OpenNLP API for many of the examples in this book. Useful OpenNLP links are listed here:

  • Home: https://opennlp.apache.org/
  • Documentation: https://opennlp.apache.org/docs/
  • Javadoc: linked from the manuals at https://opennlp.apache.org/docs/
  • Download: https://opennlp.apache.org/cgi-bin/download.cgi
  • Wiki: https://cwiki.apache.org/confluence/display/OPENNLP/Index

Stanford NLP

The Stanford NLP Group conducts NLP research and provides tools for NLP tasks. Stanford CoreNLP is one of these toolsets. In addition, there are other toolsets, such as the Stanford Parser, the Stanford POS tagger, and the Stanford Classifier. The Stanford tools support English and Chinese and basic NLP tasks, including tokenization and named-entity recognition.

These tools are released under the full GPL, which does not allow them to be used in commercial applications, though a separate commercial license is available. The API is well-organized and supports the core NLP functionality.

There are several tokenization approaches supported by the Stanford group. We will use the PTBTokenizer class to illustrate the use of this NLP library. The constructor demonstrated here uses a Reader object, a LexedTokenFactory<T> argument, and a string to specify which of the several options is to be used.

LexedTokenFactory is an interface that is implemented by the CoreLabelTokenFactory and WordTokenFactory classes. The former class supports the retention of the beginning and ending character positions of a token, whereas the latter class simply returns a token as a string without any positional information. The WordTokenFactory class is used by default.

The CoreLabelTokenFactory class is used in the following example. A StringReader is created using a string. The last argument is used for the option parameter, which is null for this example. The Iterator interface is implemented by the PTBTokenizer class, allowing us to use the hasNext and next methods to display the tokens:

PTBTokenizer<CoreLabel> ptb = new PTBTokenizer<>( 
new StringReader("He lives at 1511 W. Randolph."), 
new CoreLabelTokenFactory(), null); 
while (ptb.hasNext()) { 
  System.out.println(ptb.next()); 
} 

The output is as follows:

He
lives
at
1511
W.
Randolph
.  

We will use the Stanford NLP library extensively in this book. A list of Stanford links follows. Documentation and download links are included in each of the distributions:

  • Home: http://nlp.stanford.edu/index.shtml
  • CoreNLP: http://nlp.stanford.edu/software/corenlp.shtml#Download
  • Parser: http://nlp.stanford.edu/software/lex-parser.shtml
  • POS Tagger: http://nlp.stanford.edu/software/tagger.shtml
  • java-nlp-user mailing list: https://mailman.stanford.edu/mailman/listinfo/java-nlp-user

LingPipe

LingPipe consists of a set of tools to perform common NLP tasks. It supports model training and testing. There are both royalty-free and licensed versions of the tool. The production use of the free version is limited.

To demonstrate the use of LingPipe, we will illustrate how it can be used to tokenize text using the Tokenizer class. Start by declaring two lists, one to hold the tokens and a second to hold the whitespace:

List<String> tokenList = new ArrayList<>(); 
List<String> whiteList = new ArrayList<>(); 

Next, declare a string to hold the text to be tokenized:

String text = "A sample sentence processed \nby \tthe " + 
    "LingPipe tokenizer."; 

Now, create an instance of the Tokenizer class. As shown in the following code block, a static tokenizer method is used to create an instance of the Tokenizer class based on an Indo-European factory class:

Tokenizer tokenizer = IndoEuropeanTokenizerFactory.INSTANCE. 
tokenizer(text.toCharArray(), 0, text.length()); 

The tokenize method of this class is then used to populate the two lists:

tokenizer.tokenize(tokenList, whiteList); 

Use a for-each statement to display the tokens:

for(String element : tokenList) { 
  System.out.print(element + " "); 
} 
System.out.println(); 

The output of this example is shown here:

A sample sentence processed by the LingPipe tokenizer

A list of LingPipe links follows:

  • Home: http://alias-i.com/lingpipe/index.html
  • Tutorials: http://alias-i.com/lingpipe/demos/tutorial/read-me.html
  • JavaDocs: http://alias-i.com/lingpipe/docs/api/index.html
  • Download: http://alias-i.com/lingpipe/web/install.html
  • Core: http://alias-i.com/lingpipe/web/download.html
  • Models: http://alias-i.com/lingpipe/web/models.html

GATE

GATE is a set of tools written in Java and developed at the University of Sheffield in England. It supports many NLP tasks and languages. It can also be used as a pipeline for NLP processing. It supports an API along with GATE Developer, a document viewer that displays text along with annotations. This is useful for examining a document using highlighted annotations. GATE Mimir, a tool for indexing and searching text generated by various sources, is also available. Using GATE for many NLP tasks involves a bit of code. GATE Embedded is used to embed GATE functionality directly in code. Useful GATE links are listed here:

  • Home: https://gate.ac.uk/
  • Documentation: https://gate.ac.uk/documentation.html
  • JavaDocs: http://jenkins.gate.ac.uk/job/GATE-Nightly/javadoc/
  • Download: https://gate.ac.uk/download/
  • Wiki: http://gatewiki.sf.net/

TwitIE is an open source GATE pipeline for information-extraction over tweets. It contains the following:

  • Language identification for social media data
  • A Twitter tokenizer for handling smileys, usernames, URLs, and so on
  • A POS tagger
  • Text normalization

It is available as part of the GATE Twitter plugin. The relevant links are listed here:

  • Home: https://gate.ac.uk/wiki/twitie.html
  • Documentation: https://gate.ac.uk/sale/ranlp2013/twitie/twitie-ranlp2013.pdf?m=1

UIMA

The Organization for the Advancement of Structured Information Standards (OASIS) is a consortium focused on information-oriented business technologies. It developed the Unstructured Information Management Architecture (UIMA) standard as a framework for NLP pipelines. The standard is implemented by the Apache UIMA project.

Although it supports pipeline creation, it also describes a series of design patterns, data representations, and user roles for the analysis of text. UIMA links are listed here:

  • Home: https://uima.apache.org/
  • Documentation: https://uima.apache.org/documentation.html
  • JavaDocs: https://uima.apache.org/d/uimaj-2.6.0/apidocs/index.html
  • Download: https://uima.apache.org/downloads.cgi
  • Wiki: https://cwiki.apache.org/confluence/display/UIMA/Index

Apache Lucene Core

Apache Lucene Core is an open source library for full-featured text search engines, written in Java. It uses tokenization to break text into small chunks for indexing. It also provides pre- and post-tokenization options for analysis purposes. It supports stemming, filtering, text normalization, and synonym expansion after tokenization. When used, it creates a directory of index files whose contents can then be searched. It is not an NLP toolkit as such, but it provides powerful tools for working with text and for advanced string manipulation built on tokenization. In effect, it provides a free search engine. The important links for Apache Lucene are listed here:

  • Home: http://lucene.apache.org/
  • Documentation: http://lucene.apache.org/core/documentation.html
  • JavaDocs: http://lucene.apache.org/core/7_3_0/core/index.html
  • Download: http://lucene.apache.org/core/mirrors-core-latest-redir.html
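To give a flavor of the library, the following is a minimal sketch that indexes one document in memory and searches it. It assumes a Lucene 7.x dependency set (lucene-core, lucene-analyzers-common, and lucene-queryparser); RAMDirectory is used only for brevity:

import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.TextField;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.queryparser.classic.QueryParser;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.ScoreDoc;
import org.apache.lucene.store.RAMDirectory;

public class LuceneSketch {
    public static void main(String[] args) throws Exception {
        RAMDirectory directory = new RAMDirectory();
        StandardAnalyzer analyzer = new StandardAnalyzer();

        // Index a single document with one searchable, stored field
        try (IndexWriter writer = new IndexWriter(directory,
                new IndexWriterConfig(analyzer))) {
            Document doc = new Document();
            doc.add(new TextField("body",
                "Apache Lucene is a text search library", Field.Store.YES));
            writer.addDocument(doc);
        }

        // Search the index for the term "lucene"
        IndexSearcher searcher = new IndexSearcher(DirectoryReader.open(directory));
        ScoreDoc[] hits = searcher.search(
            new QueryParser("body", analyzer).parse("lucene"), 10).scoreDocs;
        for (ScoreDoc hit : hits) {
            System.out.println(searcher.doc(hit.doc).get("body"));
        }
    }
}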

Deep learning for Java

Deep learning is a branch of machine learning, which is itself a subset of AI. Deep learning is inspired by the functioning of the biological human brain. It uses concepts such as neurons to create neural networks, which can be part of supervised or unsupervised learning. Deep learning concepts are widely applied in computer vision, speech recognition, NLP, social-network analysis and filtering, fraud detection, prediction, and other fields. Deep learning proved itself in image processing when it began outperforming all other approaches in the ImageNet competition, and it has since started to show promising results in NLP. Some of the areas where deep learning has performed very well include Named-Entity Recognition (NER), sentiment analysis, POS tagging, machine translation, text classification, caption generation, and question answering.

An excellent overview can be found in Goldberg's work at https://arxiv.org/abs/1510.00726. There are various tools and libraries available for deep learning. The following is a list of libraries to get you started:

  • Deeplearning4j (https://deeplearning4j.org/): An open source, distributed deep learning library for the JVM.
  • Weka (https://www.cs.waikato.ac.nz/ml/weka/index.html): A data-mining workbench written in Java, with a collection of machine learning algorithms that support preprocessing, prediction, regression, clustering, association rules, and visualization.
  • Massive Online Analysis (MOA) (https://moa.cms.waikato.ac.nz/): Used on real-time streams. Supports machine learning and data mining.
  • Environment for Developing KDD-Applications Supported by Index Structures (ELKI) (https://elki-project.github.io/): A data-mining framework that focuses on research algorithms, with an emphasis on unsupervised methods for cluster analysis and outlier detection.
  • Neuroph (http://neuroph.sourceforge.net/index.html): A lightweight Java neural network framework, licensed under the Apache License 2.0, used to develop neural network architectures. It also provides GUI tools for creating and training datasets.
  • Aerosolve (http://airbnb.io/aerosolve/): A machine learning package developed by Airbnb and described as being built for humans.

You can find hundreds of repositories for deep learning and Java on GitHub (https://github.com/search?l=Java&q=deep+learning&type=Repositories).

Overview of text-processing tasks

Although there are numerous NLP tasks that can be performed, we will focus only on a subset of them. A brief overview of these tasks is presented here; each is also covered in the following chapters.

Many of these tasks are used together with other tasks to achieve an objective. We will see this as we progress through the book. For example, tokenization is frequently used as an initial step in many of the other tasks. It is a fundamental and basic step.

Finding parts of text

Text can be decomposed into a number of different types of elements, such as words, sentences, and paragraphs. There are several ways of classifying these elements. When we refer to parts of text in this book, we are referring to words, sometimes called tokens. Morphology is the study of the structure of words. We will use a number of morphology terms in our exploration of NLP. However, there are many ways to classify words, including the following:

  • Simple words: These are the common connotations of what a word means, including the 17 words in this sentence.
  • Morphemes: These are the smallest units of a word that are meaningful. For example, in the word bounded, bound is considered to be a morpheme. Morphemes also include parts such as the suffix, ed.
  • Prefix/suffix: This precedes or follows the root of a word. For example, in the word graduation, the ation is a suffix based on the word graduate.
  • Synonyms: This is a word that has the same meaning as another word. Words such as small and tiny can be recognized as synonyms. Addressing this issue requires word-sense disambiguation.
  • Abbreviations: These shorten the use of a word. Instead of using Mister Smith, we use Mr. Smith.
  • Acronyms: These are used extensively in many fields, including computer science. They use a combination of letters for phrases such as FORmula TRANslation for FORTRAN. They can be recursive, such as GNU. Of course, the one we will continue to use is NLP.
  • Contractions: We'll find these useful for commonly used combinations of words, such as the first word of this sentence.
  • Numbers: A specialized word that normally uses only digits. However, more complex versions can include a period and a special character to reflect scientific notation or numbers of a specific base.

Identifying these parts is useful for other NLP tasks. For example, to determine the boundaries of a sentence, it is necessary to break it apart and determine which elements terminate a sentence.

The process of breaking text apart is called tokenization. The result is a stream of tokens. The elements of the text that determine where elements should be split are called delimiters. For most English text, whitespace is used as a delimiter. This type of a delimiter typically includes blanks, tabs, and new line characters.

Tokenization can be simple or complex. Here, we will demonstrate a simple tokenization using the String class' split method. First, declare a string to hold the text that is to be tokenized:

String text = "Mr. Smith went to 123 Washington avenue."; 

The split method uses a regular expression argument to specify how the text should be split. In the following code sequence, its argument is the \\s+ string. This specifies that one or more whitespaces will be used as the delimiter:

String tokens[] = text.split("\\s+"); 

A for-each statement is used to display the resulting tokens:

for(String token : tokens) { 
  System.out.println(token); 
} 

When executed, the output will appear as shown here:

Mr.
Smith
went
to
123
Washington
avenue.  

In Chapter 2, Finding Parts of Text, we will explore the tokenization process in depth.

Finding sentences

We tend to think of the process of identifying sentences as simple. In English, we look for termination characters, such as a period, question mark, or exclamation mark. However, as we will see in Chapter 3, Finding Sentences, this is not always that simple. Factors that make it more difficult to find the end of sentences include the use of embedded periods in such phrases as Dr. Smith or 204 SW. Park Street.

This process is also called sentence boundary disambiguation (SBD). This is a more significant problem in English than it is in languages such as Chinese or Japanese, which have unambiguous sentence delimiters.

Identifying sentences is useful for a number of reasons. Some NLP tasks, such as POS tagging and entity-extraction, work on individual sentences. Question-answering applications also need to identify individual sentences. For these processes to work correctly, sentence boundaries must be determined correctly.
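As a preview, here is a minimal OpenNLP sketch, assuming the en-sent.bin sentence model file is available via the getModelDir() helper used in the earlier tokenization example:

try (InputStream is = new FileInputStream( 
        new File(getModelDir(), "en-sent.bin"))) { 
    SentenceModel model = new SentenceModel(is); 
    SentenceDetectorME detector = new SentenceDetectorME(model); 
    // The abbreviation "Dr." should not terminate the first sentence 
    for (String sentence : detector.sentDetect( 
            "Dr. Smith arrived. He sat down.")) { 
        System.out.println(sentence); 
    } 
} catch (IOException ex) { 
    ex.printStackTrace(); 
} 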

The following example demonstrates how sentences can be found using the Stanford DocumentPreprocessor class. This class will generate a list of sentences based on either simple text or an XML document. The class implements the Iterable interface, allowing it to be easily used in a for-each statement.

Start by declaring a string containing the following sentences:

String paragraph = "The first sentence. The second sentence."; 

Create a StringReader object based on the string. This class supports simple read type methods and is used as the argument of the DocumentPreprocessor constructor:

Reader reader = new StringReader(paragraph); 
DocumentPreprocessor documentPreprocessor =  
new DocumentPreprocessor(reader); 

The DocumentPreprocessor object will now hold the sentences of the paragraph. In the following statement, a list of strings is created and is used to hold the sentences found:

List<String> sentenceList = new LinkedList<String>(); 

Each element of the documentPreprocessor object is a sentence represented as a list of HasWord objects, as shown in the following block of code. A HasWord element is an object that represents a word. An instance of StringBuilder is used to construct the sentence, with each element of the hasWordList being appended. When the sentence has been built, it is added to the sentenceList list:

for (List<HasWord> element : documentPreprocessor) { 
  StringBuilder sentence = new StringBuilder(); 
  List<HasWord> hasWordList = element; 
  for (HasWord token : hasWordList) { 
      sentence.append(token).append(" "); 
  } 
  sentenceList.add(sentence.toString()); 
} 

A for-each statement is then used to display the sentences:

for (String sentence : sentenceList) { 
  System.out.println(sentence); 
} 

The output will appear as shown here:

The first sentence . 
The second sentence .   

The SBD process is covered in depth in Chapter 3, Finding Sentences.

Feature-engineering

Feature-engineering plays an essential role in developing NLP applications; it is very important for machine learning, especially in prediction-based models. It is the process of transforming raw data into features, using domain knowledge, so that machine learning algorithms work. Features give us a more focused view of the raw data. Once the features are identified, feature-selection is done to reduce the dimensionality of the data. When raw data is processed, patterns or features are detected, but this may not be enough to enhance the training dataset. Engineered features enhance training by providing relevant information that helps in differentiating the patterns in the data. The new features may not be captured or apparent in the original dataset or the extracted features. Hence, feature-engineering is an art and requires domain expertise. It is still a human craft, something machines are not yet good at.
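As a simple illustration of turning raw text into features, the following sketch builds a bag-of-words count map that a learning algorithm could consume. The featurize method is a hypothetical helper, not part of any library:

import java.util.HashMap;
import java.util.Map;

public class BagOfWords {
    // Hypothetical helper: map each lowercased token to its frequency
    public static Map<String, Integer> featurize(String text) {
        Map<String, Integer> counts = new HashMap<>();
        for (String token : text.toLowerCase().split("\\W+")) {
            if (!token.isEmpty()) {
                counts.merge(token, 1, Integer::sum);
            }
        }
        return counts;
    }

    public static void main(String[] args) {
        System.out.println(featurize("The cat sat on the mat. The cat slept."));
    }
}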

Chapter 6, Representing Text with Features, will show how text documents can be represented as features, since traditional features cannot be applied directly to text documents.

Finding people and things

Search engines do a pretty good job of meeting the needs of most users. People frequently use search engines to find the address of a business or movie showtimes. A word-processor can perform a simple search to locate a specific word or phrase in a text. However, this task can get more complicated when we need to consider other factors, such as whether synonyms should be used or whether we are interested in finding things closely related to a topic.

For example, let's say we visit a website because we are interested in buying a new laptop. After all, who doesn't need a new laptop? When you go to the site, a search engine will be used to find laptops that possess the features you are looking for. The search is frequently conducted based on a previous analysis of vendor information. This analysis often requires text to be processed in order to derive useful information that can eventually be presented to a customer.

The presentation may be in the form of facets. These are normally displayed on the left-hand side of a web page, as on an Amazon product-listing page. For example, the facets for laptops might include categories such as Ultrabook, Chromebook, or Hard Disk Size.

Some searches can be very simple. For example, the String class and related classes have methods, such as the indexOf and lastIndexOf methods, that can find the occurrence of a string. In the simple example that follows, the index of the occurrence of the target string is returned by the indexOf method:

String text = "Mr. Smith went to 123 Washington avenue."; 
String target = "Washington"; 
int index = text.indexOf(target); 
System.out.println(index); 

The output of this sequence is shown here:

22

This approach is useful for only the simplest problems.

When text is searched, a common technique is to use a data structure called an inverted index. This process involves tokenizing the text and identifying terms of interest in the text along with their position. The terms and their positions are then stored in the inverted index. When a search is made for the term, it is looked up in the inverted index and the positional information is retrieved. This is faster than searching for the term in the document each time it is needed. This data structure is used frequently in databases, information-retrieval systems, and search engines.
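A toy version of this data structure can be built in a few lines. This sketch maps each term to a list of (document, position) pairs:

import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class InvertedIndexDemo {
    public static void main(String[] args) {
        String[] documents = {"the cat sat", "the dog sat", "a cat ran"};
        // term -> list of {documentId, position} pairs
        Map<String, List<int[]>> index = new HashMap<>();
        for (int d = 0; d < documents.length; d++) {
            String[] tokens = documents[d].split("\\s+");
            for (int pos = 0; pos < tokens.length; pos++) {
                index.computeIfAbsent(tokens[pos], k -> new ArrayList<>())
                     .add(new int[]{d, pos});
            }
        }
        // Look up a term without rescanning the documents
        for (int[] hit : index.getOrDefault("cat", Collections.emptyList())) {
            System.out.println("document " + hit[0] + ", position " + hit[1]);
        }
    }
}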

More sophisticated searches might involve responding to queries such as: "What are some good restaurants in Boston?" To answer this query, we might need to perform entity-recognition/resolution to identify the significant terms in the query, perform semantic analysis to determine the meaning of the query, search, and then rank the candidate responses.

To illustrate the process of finding names, we use a combination of a tokenizer and the OpenNLP TokenNameFinderModel class to find names in a text. Since this technique may throw IOException, we will use a try...catch block to handle it. Declare this block and an array of strings holding the sentences, as shown here:

try { 
    String[] sentences = { 
        "Tim was a good neighbor. Perhaps not as good as Bob " + 
        "Haywood, but still pretty good. Of course Mr. Adam " + 
        "took the cake!"}; 
    // Insert code to find the names here 
} catch (IOException ex) { 
    ex.printStackTrace(); 
} 

Before the sentences can be processed, we need to tokenize the text. Set up the tokenizer using the Tokenizer class, as shown here:

Tokenizer tokenizer = SimpleTokenizer.INSTANCE; 

We will need a model to detect the names. Names will be found on a per-sentence basis, which avoids grouping terms that span sentence boundaries. We will use the TokenNameFinderModel class based on the model found in the en-ner-person.bin file. An instance of TokenNameFinderModel is created from this file as follows:

TokenNameFinderModel model = new TokenNameFinderModel( 
new File("C:\\OpenNLP Models", "en-ner-person.bin")); 

The NameFinderME class will perform the actual task of finding the name. An instance of this class is created using the TokenNameFinderModel instance, as shown here:

NameFinderME finder = new NameFinderME(model); 

Use a for-each statement to process each sentence, as shown in the following code sequence. The tokenize method will split the sentence into tokens and the find method returns an array of Span objects. These objects store the starting and ending indexes for the names identified by the find method:

for (String sentence : sentences) { 
    String[] tokens = tokenizer.tokenize(sentence); 
    Span[] nameSpans = finder.find(tokens); 
    System.out.println(Arrays.toString( 
    Span.spansToStrings(nameSpans, tokens))); 
} 

When executed, it will generate the following output:

[Tim, Bob Haywood, Adam]  

The primary focus of Chapter 4, Finding People and Things, is name recognition.

Detecting parts of speech

Another way of classifying the parts of text is at the sentence level. A sentence can be decomposed into individual words or combinations of words according to categories, such as nouns, verbs, adverbs, and prepositions. Most of us learned how to do this in school. We also learned not to end a sentence with a preposition, contrary to what we did in the second sentence of this paragraph.

Detecting the POS is useful in other tasks, such as extracting relationships and determining the meaning of text. Determining these relationships is called parsing. POS processing is useful for enhancing the quality of data sent to other elements of a pipeline.

The internals of a POS process can be complex. Fortunately, most of the complexity is hidden from us and encapsulated in classes and methods. We will use a couple of OpenNLP classes to illustrate this process. We will need a model to detect the POS. The POSModel class will be used and instanced using the model found in the en-pos-maxent.bin file, as shown here:

POSModel model = new POSModelLoader().load( 
    new File("../OpenNLP Models/", "en-pos-maxent.bin")); 

The POSTaggerME class is used to perform the actual tagging. Create an instance of this class based on the previous model, as shown here:

POSTaggerME tagger = new POSTaggerME(model); 

Next, declare a string containing the text to be processed:

String sentence = "POS processing is useful for enhancing the "  
   + "quality of data sent to other elements of a pipeline."; 

Here, we will use WhitespaceTokenizer to tokenize the text:

String tokens[] = WhitespaceTokenizer.INSTANCE.tokenize(sentence); 

The tag method is then used to identify the parts of speech, storing the results in an array of strings:

String[] tags = tagger.tag(tokens); 

The tokens and their corresponding tags are then displayed:

for(int i=0; i<tokens.length; i++) { 
    System.out.print(tokens[i] + "[" + tags[i] + "] "); 
} 

When executed, the following output will be produced:

    POS[NNP] processing[NN] is[VBZ] useful[JJ] for[IN] enhancing[VBG] the[DT] quality[NN] of[IN] data[NNS] sent[VBN] to[TO] other[JJ] elements[NNS] of[IN] a[DT] pipeline.[NN]  

Each token is followed by an abbreviation, contained within brackets, for its POS. For example, NNP means that it is a proper noun. These abbreviations will be covered in Chapter 5, Detecting Parts-of-Speech, which is devoted to exploring this topic in depth.

Classifying text and documents

Classification is concerned with assigning labels to information found in text or documents. These labels may or may not be known when the process occurs. When labels are known, the process is called classification. When the labels are unknown, the process is called clustering.

Also of interest in NLP is the process of categorization. This is the process of assigning some text element to one of several possible groups. For example, military aircraft can be categorized as fighter, bomber, surveillance, transport, or rescue.

Classifiers can be organized by the type of output they produce. This can be binary, which results in a yes/no output. This type is often used to support spam filters. Other types will result in multiple possible categories.

Classification is more of a process than many of the other NLP tasks. It involves the steps that we will discuss in the Understanding NLP models section. Due to the length of this process, we will not illustrate it here. In Chapter 8, Classifying Text and Documents, we will investigate the classification process and provide a detailed example.

Extracting relationships

Relationship-extraction identifies relationships that exist in text. For example, with the sentence, "The meaning and purpose of life is plain to see," we know that the topic of the sentence is "The meaning and purpose of life." It is related to the last phrase that suggests that it is "plain to see."

Humans can do a pretty good job of determining how things are related to each other, at least at a high level. Determining deep relationships can be more difficult. Using a computer to extract relationships can also be challenging. However, computers can process large datasets to find relationships that would not be obvious to a human or that could not be done in a reasonable period of time.

Numerous relationships are possible. These include relationships such as where something is located, how two people are related to each other, the parts of a system, and who is in charge. Relationship-extraction is useful for a number of tasks, including building knowledge bases, performing trend-analysis, gathering intelligence, and performing product searches. Finding relationships is sometimes called text analytics.

There are several techniques that we can use to perform relationship-extractions. These are covered in more detail in Chapter 10, Using Parser to Extract Relationships. Here, we will illustrate one technique to identify relationships within a sentence using the Stanford NLP StanfordCoreNLP class. This class supports a pipeline where annotators are specified and applied to text. Annotators can be thought of as operations to be performed. When an instance of the class is created, the annotators are added using a Properties object found in the java.util package.

First, create an instance of the Properties class. Then, assign the annotators as follows:

Properties properties = new Properties();         
properties.put("annotators", "tokenize, ssplit, parse"); 

We used three annotators, which specify the operations to be performed. In this case, these are the minimum required to parse the text. The first one, tokenize, will tokenize the text. The ssplit annotator splits the tokens into sentences. The last annotator, parse, performs the syntactic analysis, the parsing of the text.

Next, create an instance of the StanfordCoreNLP class using the properties' reference variable:

StanfordCoreNLP pipeline = new StanfordCoreNLP(properties); 

Then, an Annotation instance is created, which uses the text as its argument:

Annotation annotation = new Annotation( 
    "The meaning and purpose of life is plain to see."); 

Apply the annotate method against the pipeline object to process the annotation object. Finally, use the prettyPrint method to display the result of the processing:

pipeline.annotate(annotation); 
pipeline.prettyPrint(annotation, System.out); 

The output of this code is shown as follows:

    Sentence #1 (11 tokens):
    The meaning and purpose of life is plain to see.
    [Text=The CharacterOffsetBegin=0 CharacterOffsetEnd=3 PartOfSpeech=DT] [Text=meaning CharacterOffsetBegin=4 CharacterOffsetEnd=11 PartOfSpeech=NN] [Text=and CharacterOffsetBegin=12 CharacterOffsetEnd=15 PartOfSpeech=CC] [Text=purpose CharacterOffsetBegin=16 CharacterOffsetEnd=23 PartOfSpeech=NN] [Text=of CharacterOffsetBegin=24 CharacterOffsetEnd=26 PartOfSpeech=IN] [Text=life CharacterOffsetBegin=27 CharacterOffsetEnd=31 PartOfSpeech=NN] [Text=is CharacterOffsetBegin=32 CharacterOffsetEnd=34 PartOfSpeech=VBZ] [Text=plain CharacterOffsetBegin=35 CharacterOffsetEnd=40 PartOfSpeech=JJ] [Text=to CharacterOffsetBegin=41 CharacterOffsetEnd=43 PartOfSpeech=TO] [Text=see CharacterOffsetBegin=44 CharacterOffsetEnd=47 PartOfSpeech=VB] [Text=. CharacterOffsetBegin=47 CharacterOffsetEnd=48 PartOfSpeech=.] 
    (ROOT
      (S
        (NP
          (NP (DT The) (NN meaning)
            (CC and)
            (NN purpose))
          (PP (IN of)
            (NP (NN life))))
        (VP (VBZ is)
          (ADJP (JJ plain)
            (S
              (VP (TO to)
                (VP (VB see))))))
        (. .)))
    
    root(ROOT-0, plain-8)
    det(meaning-2, The-1)
    nsubj(plain-8, meaning-2)
    conj_and(meaning-2, purpose-4)
    prep_of(meaning-2, life-6)
    cop(plain-8, is-7)
    aux(see-10, to-9)
    xcomp(plain-8, see-10)
  

The first part of the output displays the text along with the tokens and POS. This is followed by a tree-like structure that shows the organization of the sentence. The last part shows the relationships between the elements at a grammatical level. Consider the following example:

prep_of(meaning-2, life-6)  

This shows how the preposition, of, is used to relate the words meaning and life. This information is useful for many text-simplification tasks.

Using combined approaches

As suggested earlier, NLP problems often involve using more than one basic NLP task. These are frequently combined in a pipeline to obtain the desired results. We saw one use of a pipeline in the previous section, Extracting relationships.

Most NLP solutions will use pipelines. We will provide several examples of pipelines in Chapter 11, Combined Pipeline.

Understanding NLP models

Regardless of the NLP task being performed or the NLP toolset being used, there are several steps that they all have in common. In this section, we will present these steps. As you go through the chapters and techniques presented in this book, you will see these steps repeated with slight variations. Getting a good understanding of them now will ease the task of learning the techniques.

The basic steps include the following:

  1. Identifying the task
  2. Selecting a model
  3. Building and training the model
  4. Verifying the model
  5. Using the model

We will discuss each of these steps in the following sections.

Identifying the task

It is important to understand the problem that needs to be solved. Based on this understanding, a solution can be devised that consists of a series of steps. Each of these steps will use an NLP task.

For example, suppose we want to answer a query such as, "Who is the mayor of Paris?" We will need to parse the query into its parts of speech, determine the nature of the question and its qualifying elements, and eventually use a repository of knowledge, created using other NLP tasks, to answer the question.

Other problems may not be quite as involved. We might only need to break apart text into components so that the text can be associated with a category. For example, a vendor's product description may be analyzed to determine the potential product categories. The analysis of the description of a car would allow it to be placed into categories such as sedan, sports car, SUV, or compact.

Once you have an idea of what NLP tasks are available, you will be better able to match them with the problem you are trying to solve.

Selecting a model

Many of the tasks that we will examine are based on models. For example, if we need to split a document into sentences, we need an algorithm to do this. However, even the best sentence-boundary-detection techniques have problems doing this correctly every time. This has resulted in the development of models that examine the elements of text and then use this information to determine where sentence breaks occur.

The right model can be dependent on the nature of the text being processed. A model that does well for determining the end of sentences for historical documents might not work well when applied to medical text.

Many models have been created that we can use for the NLP task at hand. Based on the problem that needs to be solved, we can make informed decisions as to which model is the best. In some situations, we might need to train a new model. These decisions frequently involve trade-offs between accuracy and speed. Understanding the problem domain and the required quality of results enables us to select the appropriate model.

Building and training the model

Training a model is the process of executing an algorithm against a set of data, formulating the model, and then verifying the model. We may encounter situations where the text that needs to be processed is significantly different from what we have seen and used before. For example, using models trained with journalistic text might not work well when processing tweets. This may mean that the existing models will not work well with this new data. When this situation arises, we will need to train a new model.

To train a model, we will often use data that has been marked up in such a way that we know the correct answer. For example, if we are dealing with POS tagging, the data will have POS elements (such as nouns and verbs) marked in the data. When the model is being trained, it will use this information to create the model. This dataset is called a corpus.

Verifying the model

Once the model has been created, we need to verify it against a sample set. The typical verification approach is to use a sample set where the correct responses are known. When the model is used with this data, we are able to compare its result to the known good results and assess the quality of the model. Often, only part of a corpus is used for training while the other part is used for verification.

Using the model

Using the model is simply applying the model to the problem at hand. The details are dependent on the model being used. This was illustrated in several of the earlier demonstrations, such as in the Detecting parts of speech section where we used the POS model, as contained in the en-pos-maxent.bin file.

Preparing data

An important step in NLP is finding and preparing the data for processing. This includes the data for training purposes and the data that needs to be processed. There are several factors that need to be considered. Here, we will focus on the support Java provides for working with characters.

We need to consider how characters are represented. Although we will deal primarily with English text, other languages present unique problems. Not only are there differences in how a character can be encoded, the order in which text is read will vary. For example, Japanese orders its text in columns going from right to left.

There are also a number of possible encodings, including ASCII, Latin, and Unicode, to mention a few. A more complete list follows. Unicode, in particular, is a complex and extensive encoding scheme:

  • ASCII: A character encoding using 128 (0-127) values.
  • Latin: There are several Latin variants, each using 256 values. They include various accented characters, such as those with an umlaut. Different versions of Latin have been introduced to address different languages, such as Turkish and Esperanto.
  • Big5: A two-byte encoding that addresses the Chinese character set.
  • Unicode: There are three encodings for Unicode: UTF-8, UTF-16, and UTF-32. These use a minimum of 1, 2, and 4 bytes per character, respectively. Unicode is able to represent virtually every written language in use today.

Java is capable of handling these encoding schemes. The javac executable's -encoding command-line option is used to specify the encoding scheme to use. In the following command line, the Big5 encoding scheme is specified:

javac -encoding Big5
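The same concern applies when reading text within a program. The following sketch reads a file (the data.txt name is only a placeholder) with an explicit character set, using only the standard library:

import java.io.BufferedReader;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;

public class EncodingDemo {
    public static void main(String[] args) throws IOException {
        // The charset must match the encoding the file was written with
        try (BufferedReader reader = new BufferedReader(new InputStreamReader(
                new FileInputStream("data.txt"), StandardCharsets.UTF_8))) {
            String line;
            while ((line = reader.readLine()) != null) {
                System.out.println(line);
            }
        }
    }
}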

Character-processing is supported using the primitive char data type, the Character class, and several other classes and interfaces, as summarized here:

  • char: A primitive data type.
  • Character: The wrapper class for char.
  • CharBuffer: This class supports a buffer of char values, providing get/put operations for individual characters or sequences of characters.
  • CharSequence: An interface implemented by CharBuffer, Segment, String, StringBuffer, and StringBuilder. It supports read-only access to a sequence of chars.

Java also provides a number of classes and interfaces to support strings. These are summarized in the following table. We will use these in many of our examples. The String, StringBuffer, and StringBuilder classes provide similar string-processing capabilities but differ in whether they can be modified and whether they are thread-safe. The CharacterIterator interface and the StringCharacterIterator class provide techniques to traverse character sequences.

The Segment class represents a fragment of text:

Class/interface | Description
String | An immutable string.
StringBuffer | Represents a modifiable string. It is thread-safe.
StringBuilder | Compatible with the StringBuffer class, but not thread-safe.
Segment | Represents a fragment of text in a character array. It provides rapid access to the character data in the array.
CharacterIterator | Defines an iterator for text. It supports bidirectional traversal of text.
StringCharacterIterator | A class that implements the CharacterIterator interface for a String.
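
The following sketch contrasts the mutable StringBuilder with the immutable String, and shows the bidirectional traversal that StringCharacterIterator provides:

import java.text.CharacterIterator;
import java.text.StringCharacterIterator;

public class StringSupport {
    public static void main(String[] args) {
        // StringBuilder is mutable, unlike String, but not thread-safe
        StringBuilder builder = new StringBuilder();
        builder.append("NLP").append(" with ").append("Java");
        String text = builder.toString();

        // Forward traversal
        CharacterIterator iterator = new StringCharacterIterator(text);
        for (char c = iterator.first(); c != CharacterIterator.DONE; c = iterator.next()) {
            System.out.print(c);
        }
        System.out.println();

        // Backward traversal, using the same iterator
        for (char c = iterator.last(); c != CharacterIterator.DONE; c = iterator.previous()) {
            System.out.print(c);
        }
        System.out.println();
    }
}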


We also need to consider the file format when reading from a file. Data is often obtained from sources where the words are annotated. For example, if we use a web page as the source of text, we will find that it is marked up with HTML tags. These tags are not necessarily relevant to the analysis process and may need to be removed.
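
As a naive illustration, the following sketch strips the tags from a small hard-coded HTML fragment using a regular expression. Real-world pages are better handled by a dedicated HTML parser, since regular expressions cannot cope with all valid HTML:

public class StripMarkup {
    public static void main(String[] args) {
        String html = "<html><body><p>NLP is <b>not</b> easy.</p></body></html>";

        // Remove anything between angle brackets, then collapse the
        // whitespace left behind
        String text = html.replaceAll("<[^>]*>", " ")
                .replaceAll("\\s+", " ")
                .trim();

        System.out.println(text); // NLP is not easy.
    }
}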

The Multipurpose Internet Mail Extensions (MIME) type is used to characterize the format used by a file. Common file types are listed in the following table. We either need to explicitly remove or alter the markup found in a file, or use specialized software to deal with it. Some of the NLP APIs provide tools to deal with specialized file formats:

File format | MIME type | Description
Text | text/plain | Simple text file
Office-type document | application/msword, application/vnd.oasis.opendocument.text | Microsoft Office and OpenOffice documents
PDF | application/pdf | Adobe Portable Document Format
HTML | text/html | Web pages
XML | text/xml | eXtensible Markup Language
Database | Not applicable | Data can be in a number of different formats

Many of the NLP APIs assume that the data is clean. When it is not, it needs to be cleaned, lest we get unreliable and misleading results.
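
What cleaning involves depends on the task and the corpus, but a minimal sketch of some common normalization steps might look like the following:

public class CleanText {
    public static void main(String[] args) {
        String raw = "  The QUICK   brown\tfox  ";

        String clean = raw
                .replaceAll("\\s+", " ")  // collapse runs of whitespace
                .trim()                   // remove leading/trailing space
                .toLowerCase();           // normalize case

        System.out.println(clean); // the quick brown fox
    }
}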

Summary

In this chapter, we introduced NLP and its uses. We found that it is used in many places to solve many different types of problems, ranging from simple searches to sophisticated classification problems. Java's support for NLP, in terms of both core string handling and advanced NLP libraries, was presented. The basic NLP tasks were explained and illustrated using code. The basics of deep learning and feature engineering in NLP were also included to show how deep learning is impacting NLP. We also examined the process of training, verifying, and using models.

In this book, we will lay the foundation for employing basic NLP tasks using both simple and more complex approaches. You may find that some problems require only simple approaches, and when that is the case, knowing how to use the simple techniques may be more than adequate. In other situations, a more sophisticated technique may be needed. In either case, you will be prepared to identify which tool is needed and be able to choose the appropriate technique for the task.

In the next chapter, Chapter 2, Finding Parts of Text, we will examine the process of tokenization and see how it can be used to find parts of text.


Key benefits

  • Use deep learning and NLP techniques in Java to discover hidden insights in text
  • Work with popular Java libraries such as CoreNLP, OpenNLP, and Mallet
  • Explore machine translation, identifying parts of speech, and topic modeling

Description

Natural Language Processing (NLP) allows you to take any sentence and identify patterns, special names, company names, and more. The second edition of Natural Language Processing with Java teaches you how to perform language analysis with the help of Java libraries, while constantly gaining insights from the outcomes. You’ll start by understanding how NLP and its various concepts work. Having got to grips with the basics, you’ll explore important tools and libraries in Java for NLP, such as CoreNLP, OpenNLP, Neuroph, and Mallet. You’ll then start performing NLP on different inputs and tasks, such as tokenization, model training, part-of-speech tagging, and parse trees. You’ll learn about statistical machine translation, summarization, dialog systems, complex searches, supervised and unsupervised NLP, and more. By the end of this book, you’ll have learned more about NLP, neural networks, and various other trained models in Java for enhancing the performance of NLP applications.

Who is this book for?

Natural Language Processing with Java is for you if you are a data analyst, data scientist, or machine learning engineer who wants to extract information from a language using Java. Knowledge of Java programming is needed, while a basic understanding of statistics will be useful but not mandatory.

What you will learn

  • Understand basic NLP tasks and how they relate to one another
  • Discover and use the available tokenization engines
  • Apply search techniques to find people, as well as things, within a document
  • Construct solutions to identify parts of speech within sentences
  • Use parsers to extract relationships between elements of a document
  • Identify topics in a set of documents
  • Explore topic modeling from a document

Product Details

Publication date: Jul 31, 2018
Length: 318 pages
Edition: 2nd
Language: English
ISBN-13: 9781788993494




Table of Contents

13 Chapters
1. Introduction to NLP
2. Finding Parts of Text
3. Finding Sentences
4. Finding People and Things
5. Detecting Part of Speech
6. Representing Text with Features
7. Information Retrieval
8. Classifying Texts and Documents
9. Topic Modeling
10. Using Parsers to Extract Relationships
11. Combined Pipeline
12. Creating a Chatbot
13. Other Books You May Enjoy

Customer reviews

Rating distribution: 2 out of 5 stars (3 ratings)
5 star: 0%, 4 star: 0%, 3 star: 0%, 2 star: 100%, 1 star: 0%

Cliente Amazon, May 15, 2019 (2 stars, Amazon verified review):
A trivial, non-technical book. It only teaches you how to use libraries.

Daniel, Oct 03, 2018 (2 stars, Amazon verified review):
This book explains the process, but the examples aren’t as detailed. The Python NLP books are way better.

Marc Lorent, Dec 27, 2018 (2 stars, Amazon verified review):
None of the concepts used in NLP is really explained, and the book reads more like a Javadoc of the APIs mentioned than a real book on the subject. To tell the truth, it gives the impression of being a compilation of excerpts from Wikimedia articles. The only real value of this book is the internet links it contains...

FAQs

What is the delivery time and cost of the print book?

Shipping Details

USA:


Economy: Delivery to most addresses in the US within 10-15 business days

Premium: Trackable Delivery to most addresses in the US within 3-8 business days

UK:

Economy: Delivery to most addresses in the U.K. within 7-9 business days.
Shipments are not trackable

Premium: Trackable delivery to most addresses in the U.K. within 3-4 business days!
Add one extra business day for deliveries to Northern Ireland and Scottish Highlands and islands

EU:

Premium: Trackable delivery to most EU destinations within 4-9 business days.

Australia:

Economy: Can deliver to P. O. Boxes and private residences.
Trackable service with delivery to addresses in Australia only.
Delivery time ranges from 7-9 business days for VIC and 8-10 business days for Interstate metro
Delivery time is up to 15 business days for remote areas of WA, NT & QLD.

Premium: Delivery to addresses in Australia only
Trackable delivery to most P. O. Boxes and private residences in Australia within 4-5 days based on the distance to a destination following dispatch.

India:

Premium: Delivery to most Indian addresses within 5-6 business days

Rest of the World:

Premium: Countries in the American continent: Trackable delivery to most countries within 4-7 business days

Asia:

Premium: Delivery to most Asian addresses within 5-9 business days

Disclaimer:
All orders received before 5 PM UK time will start printing on the next business day, so the estimated delivery times also start from the next day. Orders received after 5 PM UK time (in our internal systems) on a business day, or at any time over the weekend, will begin printing on the second business day after the order. For example, an order placed at 11 AM today will begin printing tomorrow, whereas an order placed at 9 PM tonight will begin printing the day after tomorrow.


Unfortunately, due to several restrictions, we are unable to ship to the following countries:

  1. Afghanistan
  2. American Samoa
  3. Belarus
  4. Brunei Darussalam
  5. Central African Republic
  6. The Democratic Republic of Congo
  7. Eritrea
  8. Guinea-Bissau
  9. Iran
  10. Lebanon
  11. Libyan Arab Jamahiriya
  12. Somalia
  13. Sudan
  14. Russian Federation
  15. Syrian Arab Republic
  16. Ukraine
  17. Venezuela
What is a customs duty/charge?

Customs duties are charges levied on goods when they cross international borders; they are taxes imposed on imported goods. These duties are charged by special authorities and bodies created by local governments and are meant to protect local industries, economies, and businesses.

Do I have to pay customs charges for the print book order?

Orders shipped to the countries listed under EU27 will not incur customs charges; these are paid by Packt as part of the order.

List of EU27 countries: www.gov.uk/eu-eea

For shipments to countries outside the EU27, a customs duty or localized taxes may be applicable. These are charged by the recipient country, must be paid by the customer, and are not included in the shipping charges on the order.

How do I know my customs duty charges?

The amount of duty payable varies greatly depending on the imported goods, the country of origin, and several other factors, such as the total invoice amount, dimensions such as weight, and other criteria applicable in your country.

For example:

  • If you live in Mexico and the declared value of your ordered items is over $50, you will have to pay an additional import tax of 19% ($9.50) to the courier service in order to receive your package.
  • If you live in Turkey and the declared value of your ordered items is over €22, you will have to pay an additional import tax of 18% (€3.96) to the courier service in order to receive your package.
How can I cancel my order?

Cancellation Policy for Published Printed Books:

You can cancel any order within 1 hour of placing the order. Simply contact customercare@packt.com with your order details or payment transaction id. If your order has already started the shipment process, we will do our best to stop it. However, if it is already on the way to you then when you receive it, you can contact us at customercare@packt.com using the returns and refund process.

Please understand that Packt Publishing cannot provide refunds or cancel any order except in the cases described in our Return Policy (that is, where Packt Publishing agrees to replace your printed book because it arrived damaged or with a material defect); otherwise, Packt Publishing will not accept returns.

What is your returns and refunds policy?

Return Policy:

We want you to be happy with your purchase from Packtpub.com. We will not hassle you with returning print books to us. If the print book you receive from us is incorrect, damaged, doesn't work, or is unacceptably late, please contact the Customer Relations Team at customercare@packt.com with the order number and issue details, as explained below:

  1. If you ordered an eBook, Video, or Print Book incorrectly or accidentally, please contact the Customer Relations Team at customercare@packt.com within one hour of placing the order, and we will replace the item or refund its cost.
  2. If your eBook or Video file is faulty, or a fault occurs while it is being made available to you (that is, during download), please contact the Customer Relations Team at customercare@packt.com within 14 days of purchase, and they will resolve the issue for you.
  3. You will have a choice of replacement or refund for the problem items (damaged, defective, or incorrect).
  4. Once the Customer Care Team confirms that you will be refunded, you should receive the refund within 10 to 12 working days.
  5. If you are only requesting a refund for one book from a multiple-item order, we will refund the appropriate single item.
  6. Where the items were shipped under a free shipping offer, there will be no shipping costs to refund.

In the unlikely event that your printed book arrives damaged or with a material defect, contact our Customer Relations Team at customercare@packt.com within 14 days of receipt of the book, with appropriate evidence of the damage, and we will work with you to secure a replacement copy if necessary. Please note that each printed book you order from us is individually made on a print-on-demand basis by Packt's professional book-printing partner.

What tax is charged?

Currently, no tax is charged on the purchase of any print book (subject to change based on the laws and regulations). A localized VAT fee is charged only to our European and UK customers on eBooks, Video and subscriptions that they buy. GST is charged to Indian customers for eBooks and video purchases.

What payment methods can I use?

You can pay with the following card types:

  1. Visa Debit
  2. Visa Credit
  3. MasterCard
  4. PayPal