Raspberry Pi 3 Cookbook for Python Programmers - Third Edition

Product type: Book
Published: April 2018
Publisher: Packt Publishing
ISBN-13: 9781788629874
Pages: 552
Edition: 3rd Edition
Authors (2): Steven Lawrence Fernandes, Tim Cox
Table of Contents (23)

Title Page
Copyright and Credits
Dedication
Packt Upsell
Contributors
Preface
1. Getting Started with a Raspberry Pi 3 Computer
2. Dividing Text Data and Building Text Classifiers
3. Using Python for Automation and Productivity
4. Predicting Sentiments in Words
5. Creating Games and Graphics
6. Detecting Edges and Contours in Images
7. Creating 3D Graphics
8. Building Face Detector and Face Recognition Applications
9. Using Python to Drive Hardware
10. Sensing and Displaying Real-World Data
11. Building Neural Network Modules for Optical Character Recognition
12. Building Robots
13. Interfacing with Technology
14. Can I Recommend a Movie for You?
1. Hardware and Software List
2. Other Books You May Enjoy
Index

Pre-processing data using tokenization


Pre-processing data involves converting raw text into a form that the learning algorithm can accept.

Tokenization is the process of dividing text into a set of meaningful pieces. These pieces are called tokens.

How to do it...

  1. Import the sentence tokenizer:
from nltk.tokenize import sent_tokenize
  2. Apply it to the input text to split it into sentences:
# text holds the string to tokenize; a short sample is used here for illustration
text = "Are you curious about tokenization? Let's see how it works."
tokenize_list_sent = sent_tokenize(text)
print("\nSentence tokenizer:")
print(tokenize_list_sent)
  3. Import and apply the word tokenizer, which splits the text into individual words:
from nltk.tokenize import word_tokenize
print("\nWord tokenizer:")
print(word_tokenize(text))
  4. Use the WordPunct tokenizer, which additionally splits punctuation into separate tokens:
from nltk.tokenize import WordPunctTokenizer
word_punct_tokenizer = WordPunctTokenizer()
print("\nWord punct tokenizer:")
print(word_punct_tokenizer.tokenize(text))
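
Putting the steps together, here is a minimal, self-contained sketch. It assumes NLTK is installed and the punkt sentence models have been downloaded once with nltk.download('punkt'); the sample string is illustrative, not taken from the book:

# Compare NLTK's sentence, word, and WordPunct tokenizers on one string.
from nltk.tokenize import sent_tokenize, word_tokenize, WordPunctTokenizer

text = "Are you curious about tokenization? Let's see how it works."  # sample input

print("\nSentence tokenizer:")
print(sent_tokenize(text))                   # splits the string into sentences

print("\nWord tokenizer:")
print(word_tokenize(text))                   # splits into words; "Let's" becomes "Let" + "'s"

print("\nWord punct tokenizer:")
print(WordPunctTokenizer().tokenize(text))   # splits at punctuation: "Let's" -> "Let", "'", "s"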

The output produced by the tokenizers is shown here; each one divides the text into a different set of tokens:
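
For the sample string used above, the expected output is along these lines (illustrative):

Sentence tokenizer:
['Are you curious about tokenization?', "Let's see how it works."]

Word tokenizer:
['Are', 'you', 'curious', 'about', 'tokenization', '?', 'Let', "'s", 'see', 'how', 'it', 'works', '.']

Word punct tokenizer:
['Are', 'you', 'curious', 'about', 'tokenization', '?', 'Let', "'", 's', 'see', 'how', 'it', 'works', '.']

Note the difference in the last two: word_tokenize keeps the contraction suffix together as 's, while WordPunctTokenizer separates the apostrophe into its own token.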
