Artificial Intelligence with Python

You're reading from Artificial Intelligence with Python: A Comprehensive Guide to Building Intelligent Apps for Python Beginners and Developers

Product type: Paperback
Published: January 2017
Publisher: Packt
ISBN-13: 9781786464392
Length: 446 pages
Edition: 1st
Author: Prateek Joshi
Table of Contents (17 chapters)

Preface
1. Introduction to Artificial Intelligence
2. Classification and Regression Using Supervised Learning
3. Predictive Analytics with Ensemble Learning
4. Detecting Patterns with Unsupervised Learning
5. Building Recommender Systems
6. Logic Programming
7. Heuristic Search Techniques
8. Genetic Algorithms
9. Building Games with Artificial Intelligence
10. Natural Language Processing
11. Probabilistic Reasoning for Sequential Data
12. Building a Speech Recognizer
13. Object Detection and Tracking
14. Artificial Neural Networks
15. Reinforcement Learning
16. Deep Learning with Convolutional Neural Networks

Tokenizing text data

When we deal with text, we need to break it down into smaller pieces for analysis. This is where tokenization comes into the picture: the process of dividing the input text into pieces such as words or sentences, which are called tokens. For example, word-level tokenization of the sentence "AI is fun." produces the tokens ['AI', 'is', 'fun', '.']. Depending on what we want to do, we can define our own methods to split the text into tokens. Let's take a look at how to tokenize the input text using NLTK.

Create a new Python file and import the following packages:

from nltk.tokenize import sent_tokenize, \ 
        word_tokenize, WordPunctTokenizer 
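
Note that sent_tokenize and word_tokenize rely on NLTK's pre-trained Punkt tokenizer models. If those models have not been downloaded yet, the calls below will raise a LookupError; a one-time download (an optional setup step, not part of the original listing) takes care of it:

import nltk 

# Download the Punkt tokenizer models (only needed once per machine) 
nltk.download('punkt') 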

Define some input text that will be used for tokenization:

# Define input text 
input_text = "Do you know how tokenization works? It's actually quite interesting! Let's analyze a couple of sentences and figure it out."  

Divide the input text into sentence tokens:

# Sentence tokenizer  
print("\nSentence tokenizer:") 
print(sent_tokenize(input_text)) 
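
Running the script at this point should print the sentence tokens. The exact formatting depends on your NLTK version, but the output will look roughly like this:

Sentence tokenizer:
['Do you know how tokenization works?', "It's actually quite interesting!", "Let's analyze a couple of sentences and figure it out."]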

Divide the input text into...
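
The excerpt is cut off here, but given the tokenizers imported at the top of the file, the remaining steps presumably split the same text into word-level tokens. The following is a sketch of what that continuation could look like using word_tokenize and WordPunctTokenizer; it is an educated guess based on the imports, not the book's own listing:

# Word tokenizer: splits the text into individual words 
print("\nWord tokenizer:") 
print(word_tokenize(input_text)) 

# WordPunct tokenizer: additionally splits punctuation into separate tokens 
print("\nWord punct tokenizer:") 
print(WordPunctTokenizer().tokenize(input_text)) 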
