Python Natural Language Processing

You're reading from Python Natural Language Processing: Advanced machine learning and deep learning techniques for natural language processing

Product type: Paperback
Published: Jul 2017
Publisher: Packt
ISBN-13: 9781787121423
Length: 486 pages
Edition: 1st Edition
Author: Jalaj Thanaki

Handling corpus-raw sentences

In the previous section, we processed raw text and looked at concepts at the sentence level. In this section, we look at tokenization, lemmatization, and related concepts at the word level.

Word tokenization

Word tokenization is the process of chopping a stream of text into words, phrases, and other meaningful strings. Each unit that we get as output of this process is called a token.

Let's see the word tokenization code snippet given in Figure 4.11:

Figure 4.11: Word tokenized code snippet
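Since the snippet in Figure 4.11 is only available as an image, here is a minimal sketch of what it likely does. It uses NLTK's TreebankWordTokenizer (the tokenizer that word_tokenize wraps), chosen here because it needs no model download; the variable names are assumptions:

```python
from nltk.tokenize import TreebankWordTokenizer

# Raw input text (the book's example input)
raw_text = ("Stemming is funnier than a bummer says the sushi loving "
            "computer scientist.She really wants to buy cars. "
            "She told me angrily. It is better for you.Man is walking. "
            "We are meeting tomorrow. You really don't know..!")

# Chop the stream of text into word-level tokens
tokenizer = TreebankWordTokenizer()
tokens = tokenizer.tokenize(raw_text)
print(tokens)
```

Note that word_tokenize additionally splits the text into sentences first, which is why the book's output also separates tokens such as 'scientist', '.', and 'She'.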

The output of the code given in Figure 4.11 is as follows:

The input for word tokenization is:

Stemming is funnier than a bummer says the sushi loving computer scientist.She really wants to buy cars. She told me angrily. It is better for you.Man is walking. We are meeting tomorrow. You really don't know..!

The output for word tokenization is:

['Stemming', 'is', 'funnier', 'than', 'a', 'bummer', 'says', 'the', 'sushi', 'loving', 'computer', 'scientist', '.', 'She', 'really', 'wants', 'to', 'buy', 'cars', '.', 'She', 'told', 'me', 'angrily', '.', 'It', 'is', 'better', 'for', 'you', '.', 'Man', 'is', 'walking', '.', 'We', 'are', 'meeting', 'tomorrow', '.', 'You', 'really', 'do', "n't", 'know..', '!']

Challenges for word tokenization

If you analyze the preceding output, you can observe that the word don't is tokenized as do and n't. Tokenizing these kinds of words is pretty painful using nltk's word_tokenize.

To solve the preceding problem, you can write exception code and improve the accuracy. You need to write pattern-matching rules that solve the defined challenge, but they are highly customized and vary from application to application.
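As one illustration of such a pattern-matching rule, a hand-written regular expression can keep contractions such as don't together as single tokens. The pattern and function name below are hypothetical examples, not taken from the book:

```python
import re

def tokenize_keep_contractions(text):
    # Try contractions (don't, can't) first, then plain words, then punctuation
    return re.findall(r"\w+'\w+|\w+|[^\w\s]", text)

print(tokenize_keep_contractions("You really don't know!"))
# ['You', 'really', "don't", 'know', '!']
```

A rule like this fixes one specific case; real applications usually accumulate many such rules, which is exactly why they become so customized.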

Another challenge involves languages such as Urdu, Hebrew, Arabic, and so on. In these languages, deciding on word boundaries and finding meaningful tokens in the sentences is quite difficult.

Word lemmatization

Word lemmatization is the same concept that we defined in the first section. We will do a quick revision of it and then implement lemmatization at the word level.

Word lemmatization converts a word from its inflected form to its base form. In word lemmatization, we consider POS tags and, according to the POS tag, derive the base form that is available in the WordNet lexical database.

You can find the code snippet in Figure 4.12:

Figure 4.12: Word lemmatization code snippet

The output of the word lemmatization is as follows:

Input is: wordlemma.lemmatize('cars')  Output is: car
Input is: wordlemma.lemmatize('walking', pos='v') Output is: walk
Input is: wordlemma.lemmatize('meeting', pos='n') Output is: meeting
Input is: wordlemma.lemmatize('meeting', pos='v') Output is: meet
Input is: wordlemma.lemmatize('better', pos='a') Output is: good

Challenges for word lemmatization

Building a lexical dictionary is time consuming. Building a lemmatization tool that can consider a larger context, taking the context of preceding sentences into account, is still an open research area.
