Practical Data Analysis Using Jupyter Notebook
Learn how to speak the language of data by extracting useful and actionable insights using Python

Product type: Paperback
Published: June 2020
Publisher: Packt
ISBN-13: 9781838826031
Length: 322 pages
Edition: 1st
Author: Marc Wintjen
Table of Contents (18 chapters)

Preface
1. Section 1: Data Analysis Essentials
2. Fundamentals of Data Analysis
3. Overview of Python and Installing Jupyter Notebook
4. Getting Started with NumPy
5. Creating Your First pandas DataFrame
6. Gathering and Loading Data in Python
7. Section 2: Solutions for Data Discovery
8. Visualizing and Working with Time Series Data
9. Exploring, Cleaning, Refining, and Blending Datasets
10. Understanding Joins, Relationships, and Aggregates
11. Plotting, Visualization, and Storytelling
12. Section 3: Working with Unstructured Big Data
13. Exploring Text Data and Unstructured Data
14. Practical Sentiment Analysis
15. Bringing It All Together
16. Works Cited
17. Other Books You May Enjoy

Tokenization explained

Tokenization is the process of breaking unstructured text, such as paragraphs, sentences, or phrases, down into a list of text values called tokens. A token is the smallest unit that NLP functions use to identify and work with the data. The process creates a natural hierarchy that relates the highest unit to the lowest. Depending on the source data, a token can represent a word, a sentence, or an individual character.
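To make the hierarchy concrete, here is a minimal sketch using only plain Python string operations (a naive stand-in for a real tokenizer), splitting a short paragraph into sentence, word, and character tokens:

```python
paragraph = "Tokenization breaks text down. Tokens form a hierarchy."

# Sentence-level tokens (naive: split on the sentence-ending period).
sentences = [s.strip() for s in paragraph.split('.') if s.strip()]

# Word-level tokens, one list per sentence.
words = [sentence.split() for sentence in sentences]

# Character-level tokens for a single word, the lowest unit.
characters = list(words[0][0])

print(sentences)      # ['Tokenization breaks text down', 'Tokens form a hierarchy']
print(words[0])       # ['Tokenization', 'breaks', 'text', 'down']
print(characters[:3]) # ['T', 'o', 'k']
```

Each level of the hierarchy is just a finer-grained split of the level above it, which is why NLP libraries expose separate sentence, word, and character tokenizers.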

The process of tokenizing a body of text, sentence, or phrase typically starts by splitting words on the whitespace between them. However, identifying each token accurately requires the library package to account for exceptions such as hyphens and apostrophes, and to use a language dictionary, so that each value is properly identified. Hence, tokenization requires the language of origin of the text to be known before it can be processed. Google Translate, for example, is an NLP solution that can identify...
