Python 3 Text Processing with NLTK 3 Cookbook


Discovering word collocations

Collocations are two or more words that tend to appear frequently together, such as United States. Of course, there are many other words that can come after United, such as United Kingdom and United Airlines. As with many aspects of natural language processing, context is very important. And for collocations, context is everything!

In the case of collocations, the context will be a document in the form of a list of words. Discovering collocations in this list of words means that we'll find common phrases that occur frequently throughout the text. For fun, we'll start with the script for Monty Python and the Holy Grail.

Getting ready

The script for Monty Python and the Holy Grail is found in the webtext corpus, so be sure that it's unzipped at nltk_data/corpora/webtext/.
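
If the corpus isn't on your machine yet, NLTK's downloader can fetch it. Here's a minimal sketch that also grabs the stopwords list used later in this recipe (downloads go to your default nltk_data directory):

>>> import nltk
>>> nltk.download('webtext')
>>> nltk.download('stopwords')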

How to do it...

We're going to create a list of all lowercased words in the text, and then produce a BigramCollocationFinder, which we can use to find bigrams: pairs of words that appear together. These bigrams are found using association measurement functions in the nltk.metrics package, as follows:

>>> from nltk.corpus import webtext
>>> from nltk.collocations import BigramCollocationFinder
>>> from nltk.metrics import BigramAssocMeasures
>>> words = [w.lower() for w in webtext.words('grail.txt')]
>>> bcf = BigramCollocationFinder.from_words(words)
>>> bcf.nbest(BigramAssocMeasures.likelihood_ratio, 4)
[("'", 's'), ('arthur', ':'), ('#', '1'), ("'", 't')]

Well, that's not very useful! Let's refine it a bit by adding a word filter to remove punctuation and stopwords:

>>> from nltk.corpus import stopwords
>>> stopset = set(stopwords.words('english'))
>>> filter_stops = lambda w: len(w) < 3 or w in stopset
>>> bcf.apply_word_filter(filter_stops)
>>> bcf.nbest(BigramAssocMeasures.likelihood_ratio, 4)
[('black', 'knight'), ('clop', 'clop'), ('head', 'knight'), ('mumble', 'mumble')]

Much better! We can clearly see four of the most common bigrams in Monty Python and the Holy Grail. If you'd like to see more than four, simply increase the number you pass to nbest(), and the collocation finder will do its best.

How it works...

BigramCollocationFinder constructs two frequency distributions: one counting individual words, and another counting bigrams. A frequency distribution, or FreqDist in NLTK, is basically an enhanced Python dictionary where the keys are what's being counted, and the values are the counts. Any filtering functions that are applied reduce the size of these two FreqDists by eliminating any words that don't pass the filter. By using a filtering function to eliminate all words that are one or two characters, and all English stopwords, we can get a much cleaner result. After filtering, the collocation finder is ready to accept a generic scoring function for finding collocations.
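
If you'd like to inspect what the finder counted, NLTK 3 exposes these two distributions as the finder's word_fd and ngram_fd attributes. A quick sketch (outputs omitted, since the counts depend on which filters have already been applied):

>>> bcf.word_fd.most_common(2)   # most frequent remaining words
>>> bcf.ngram_fd.most_common(2)  # most frequent remaining bigrams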

There's more...

In addition to BigramCollocationFinder, there's also TrigramCollocationFinder, which finds triplets instead of pairs. This time, we'll look for trigrams in Australian singles advertisements with the help of the following code:

>>> from nltk.collocations import TrigramCollocationFinder
>>> from nltk.metrics import TrigramAssocMeasures
>>> words = [w.lower() for w in webtext.words('singles.txt')]
>>> tcf = TrigramCollocationFinder.from_words(words)
>>> tcf.apply_word_filter(filter_stops)
>>> tcf.apply_freq_filter(3)
>>> tcf.nbest(TrigramAssocMeasures.likelihood_ratio, 4)
[('long', 'term', 'relationship')]

Now, we don't know whether people are looking for a long-term relationship or not, but clearly it's an important topic. In addition to the stopword filter, I also applied a frequency filter, which removed any trigrams that occurred fewer than three times. That's why only one result was returned when we asked for four: only one trigram occurred three or more times.

Scoring functions

There are many more scoring functions available besides likelihood_ratio(), but other than raw_freq(), you may need a bit of a statistics background to understand how they work. Consult the NLTK API documentation for NgramAssocMeasures in the nltk.metrics package to see all the possible scoring functions.
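
As a quick sketch, here's how you'd swap in a few of the other measures; pmi, chi_sq, and raw_freq are all defined on BigramAssocMeasures, and each typically ranks the bigrams differently (outputs omitted):

>>> bcf.nbest(BigramAssocMeasures.raw_freq, 4)
>>> bcf.nbest(BigramAssocMeasures.pmi, 4)
>>> bcf.nbest(BigramAssocMeasures.chi_sq, 4)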

Scoring ngrams

In addition to the nbest() method, there are two other ways to get ngrams (a generic term used for describing bigrams and trigrams) from a collocation finder:

  • above_score(score_fn, min_score): This can be used to get all ngrams with scores that are at least min_score. The min_score value that you choose will depend heavily on the score_fn you use.
  • score_ngrams(score_fn): This will return a list of (ngram, score) tuples, which can be used to inform your choice for min_score; a short sketch of both methods follows this list.
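
Here's a short sketch of both methods, reusing the trigram finder from earlier. The min_score of 10 is an arbitrary threshold chosen for illustration, and above_score() yields its results lazily in NLTK 3, hence the list() wrapper (outputs omitted):

>>> scored = tcf.score_ngrams(TrigramAssocMeasures.likelihood_ratio)
>>> scored[0]  # the highest-scoring (ngram, score) pair
>>> list(tcf.above_score(TrigramAssocMeasures.likelihood_ratio, 10))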

See also

The nltk.metrics module will be used again in the Measuring precision and recall of a classifier and Calculating high information words recipes in Chapter 7, Text Classification.
