Practical Data Analysis Cookbook

You're reading from Practical Data Analysis Cookbook: Over 60 practical recipes on data exploration and analysis

Product type: Paperback
Published: Apr 2016
ISBN-13: 9781783551668
Length: 384 pages
Edition: 1st Edition
Author: Tomasz Drabas
Table of Contents (13 chapters)

Preface
1. Preparing the Data
2. Exploring the Data (free chapter)
3. Classification Techniques
4. Clustering Techniques
5. Reducing Dimensions
6. Regression Methods
7. Time Series Techniques
8. Graphs
9. Natural Language Processing
10. Discrete Choice Models
11. Simulations
Index

Identifying the topic of an article


Counting words is a simple and very popular technique that usually gives good results when you want to get a feel for the topic of a body of text. In this recipe, we will show you how to count the words in The Seattle Times article we have been working with so far, and so identify the topic of the article without even reading it.
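The core idea can be sketched in a few lines using only the standard library; the sample sentence below is a stand-in for the article text, not taken from the book:

```python
import re
from collections import Counter

# Stand-in text (illustrative only); in the recipe this would be the article.
text = "Seattle traffic grows as Seattle population grows"

# Lowercase and split into word tokens with a simple regular expression.
tokens = re.findall(r"[a-z']+", text.lower())

# Count occurrences; the most frequent tokens hint at the topic.
counts = Counter(tokens)
top_words = counts.most_common(2)
print(top_words)
```

In real text you would also drop stopwords (words such as "as" or "the") before counting, which NLTK's stopword corpus makes easy.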

Getting ready

To execute this recipe, you will need NLTK, Python's built-in re (regular expressions) module, NumPy, and Matplotlib. No other prerequisites are required.

How to do it…

The beginning of the code for this recipe is very similar to that of the previous recipe, so we will present only the relevant parts (from the nlp_countWords.py file):

# part-of-speech tagging
tagged_sentences = [nltk.pos_tag(w) for w in tokenized]

# extract named entities -- regular expressions approach
tagged = []

pattern = '''
    ENT: {<DT>?(<NNP|NNPS>)+}
'''

tokenizer = nltk.RegexpParser(pattern)

for sent in tagged_sentences:
    ...
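The body of that loop is not shown in this excerpt, so here is a hedged sketch of how it might continue: each tagged sentence is parsed with the chunk grammar, and the words of every ENT chunk are collected as candidate named entities. The variable names mirror the excerpt, but the loop body and the hand-tagged sample sentence are my assumptions, not the book's code:

```python
import nltk

# Chunk grammar from the excerpt: an entity (ENT) is an optional
# determiner followed by one or more proper nouns.
pattern = '''
    ENT: {<DT>?(<NNP|NNPS>)+}
'''
tokenizer = nltk.RegexpParser(pattern)

# Hand-tagged stand-in for tagged_sentences (assumed data, for illustration);
# in the recipe these come from nltk.pos_tag over the tokenized article.
tagged_sentences = [
    [('The', 'DT'), ('Seattle', 'NNP'), ('Times', 'NNP'),
     ('reported', 'VBD'), ('on', 'IN'), ('traffic', 'NN')],
]

entities = []
for sent in tagged_sentences:
    tree = tokenizer.parse(sent)
    # Collect the words of every ENT subtree found by the grammar.
    for subtree in tree.subtrees(filter=lambda t: t.label() == 'ENT'):
        entities.append(' '.join(word for word, tag in subtree.leaves()))

print(entities)
```

Run on the sample sentence, this extracts "The Seattle Times" as a single entity, because the grammar groups the determiner with the adjacent proper nouns.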