Data Science Algorithms in a Week, 2nd Edition, by David Toth and David Natingga (Packt, October 2018)

Text classification – k-NN in higher dimensions

Suppose we are given a set of documents and would like to classify other documents based on their word frequency counts. For example, consider the 120 most frequently occurring words found in the Project Gutenberg e-book of the King James Bible.
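
As a rough sketch of how such counts might be obtained (the filename bible_kjv.txt, the function name top_word_frequencies, and the tokenization by regular expression are assumptions of this example, not code from the book):

```python
import re
from collections import Counter

def top_word_frequencies(path, n=120):
    """Return the n most frequent words with their relative frequencies."""
    with open(path, encoding='utf-8') as f:
        words = re.findall(r"[a-z']+", f.read().lower())
    counts = Counter(words)
    total = sum(counts.values())
    return [(word, count / total) for word, count in counts.most_common(n)]

# Print the ten most frequent words as percentages of all words in the text.
for word, freq in top_word_frequencies('bible_kjv.txt', n=10):
    print(f'{word}: {freq:.2%}')
```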

The task is to design a metric that, given the word frequencies for each document, accurately determines how semantically close those documents are. Such a metric could then be used by the k-NN algorithm to classify new, unlabeled documents based on the existing, labeled ones.

Analysis

Suppose that we consider, for example, the N most frequent words in our corpus of documents. We then count the frequency of each of these N words in a given document and store the counts in an N-dimensional vector that represents that document. Finally, we define the distance between two documents to be the distance (for example, Euclidean) between the word frequency vectors of those two documents.
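
A minimal sketch of this representation, assuming each document is given as a plain string and that the N reference words have already been chosen (here passed in as vocabulary), might look as follows:

```python
import math
import re
from collections import Counter

def frequency_vector(text, vocabulary):
    """Represent a document as an N-dimensional vector of relative word frequencies."""
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter(words)
    total = len(words) or 1  # avoid division by zero for an empty document
    return [counts[word] / total for word in vocabulary]

def euclidean_distance(u, v):
    """Euclidean distance between two word frequency vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

# Two toy "documents" compared over a three-word vocabulary.
vocabulary = ['god', 'king', 'theorem']
doc_a = frequency_vector('And God said unto the king of Israel ...', vocabulary)
doc_b = frequency_vector('The theorem follows directly from the lemma ...', vocabulary)
print(euclidean_distance(doc_a, doc_b))
```

Dividing the counts by the total number of words, as done here, is one way to keep long and short documents comparable.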

The problem with this solution is that only certain words represent the actual content of the book, while others have to be present in the text because of grammar rules or their general, basic meaning. In fact, the 120 most frequently occurring words in the Bible differ greatly in importance. The following list shows words that have both a high frequency in the Bible and an important meaning:

  1. lord - 1.00%
  2. god - 0.56%
  3. Israel - 0.32%
  4. king - 0.32%
  5. David - 0.13%
  6. Jesus - 0.12%

These words are less likely to be present in mathematical texts, for example, but more likely to be present in texts concerned with religion or Christianity.

However, if we look just at the six most frequent words in the Bible, they happen to be less useful for detecting the meaning of the text:

  1. the - 8.07%
  2. and - 6.51%
  3. of - 4.37%
  4. to - 1.72%
  5. that - 1.63%
  6. in - 1.60%

Texts concerned with mathematics, literature, and other subjects will have similar frequencies for these words. Differences may result mostly from the writing style.

Therefore, to determine a similarity distance between two documents, we only need to look at the frequency counts of the important words. Less important words add dimensions that are better removed, since including them can lead to a misinterpretation of the results. What remains is to choose the words (dimensions) that are important for classifying the documents in our corpus. For this, consult Problem 6.
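
As an illustrative sketch only (the stop-word list and the cut-off k below are our own assumptions; choosing the important words in a principled way is the subject of Problem 6):

```python
# Illustrative stop-word list chosen for this sketch.
STOP_WORDS = {'the', 'and', 'of', 'to', 'that', 'in', 'a', 'for', 'it', 'is'}

def important_vocabulary(frequencies, k=50):
    """Keep the k most frequent words that are not stop words.

    `frequencies` is a list of (word, relative_frequency) pairs, such as the
    output of top_word_frequencies() from the earlier sketch.
    """
    return [word for word, _ in frequencies if word not in STOP_WORDS][:k]
```

The reduced vocabulary can then be passed to frequency_vector() from the earlier sketch so that document distances are computed only over the important dimensions.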
