Mastering Machine Learning with R

You're reading from Mastering Machine Learning with R: Advanced machine learning techniques for building smart applications with R 3.5

Product type: Paperback
Published: January 2019
Publisher: Packt
ISBN-13: 9781789618006
Length: 354 pages
Edition: 3rd Edition
Author: Cory Lesmeister
Table of Contents

  Preface
  1. Preparing and Understanding Data
  2. Linear Regression
  3. Logistic Regression
  4. Advanced Feature Selection in Linear Models
  5. K-Nearest Neighbors and Support Vector Machines
  6. Tree-Based Classification
  7. Neural Networks and Deep Learning
  8. Creating Ensembles and Multiclass Methods
  9. Cluster Analysis
  10. Principal Component Analysis
  11. Association Analysis
  12. Time Series and Causality
  13. Text Mining
  14. Creating a Package
  15. Other Books You May Enjoy

Overview

If you haven't been exposed to large, messy datasets, then be patient, for it's only a matter of time. If you've encountered such data, has it been in a domain where you have little subject matter expertise? If not, then once again I proffer that it's only a matter of time. Some of the common problems that fall under the term messy data include the following (a quick diagnostic sketch follows the list):

  • Missing or invalid values
  • Novel levels in a categorical feature that first appear when the algorithm is in production
  • High cardinality in categorical features such as zip codes
  • High dimensionality
  • Duplicate observations
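
Each of these problems can be surfaced with a few lines of R before any modeling begins. The following is a minimal, hypothetical sketch; the data frame df and its columns are invented purely for illustration and are not the chapter's dataset:

    # hypothetical data frame with a duplicate row, missing values,
    # and a zero-variance feature
    df <- data.frame(
      zip   = c("46032", "46032", "46033", NA, "46077"),
      sales = c(100, 100, 250, 300, NA),
      state = rep("IN", 5)
    )

    colSums(is.na(df))                                      # missing values per column
    sum(duplicated(df))                                     # count of duplicate observations
    sapply(df, function(x) length(unique(x)))               # cardinality of each feature
    sapply(df, function(x) length(unique(na.omit(x))) < 2)  # zero-variance candidates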

So this begs the question: what are we to do? Well, first we need to look at the critical tasks that need to be performed during this phase of the process. The following tasks serve as the foundation for building a learning algorithm. They're from the paper by SPSS, CRISP-DM 1.0, a step-by-step data-mining guide, available at https://the-modeling-agency.com/crisp-dm.pdf:

  • Data understanding:
    1. Collect
    2. Describe
    3. Explore
    4. Verify
  • Data preparation:
    1. Select
    2. Clean
    3. Construct
    4. Integrate
    5. Format

Certainly this is an excellent enumeration of the process, but what do we really need to do? I propose that, in practical terms we can all relate to, the following must be done once the data is joined and loaded into your machine, cloud, or whatever you use (a brief code sketch of these tasks follows the list):

  • Understand the data structure
  • Dedupe observations
  • Eliminate zero variance features and low variance features as desired
  • Handle missing values
  • Create dummy features (one-hot encoding)
  • Examine and deal with highly correlated features and those with perfect linear relationships
  • Scale as necessary
  • Create other features as desired
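
To make these tasks concrete, here is a minimal sketch using base R and the caret package. It's only one way to do it, under stated assumptions: the data frame df, its columns, and the thresholds are invented for illustration and are not the code this chapter develops:

    # hypothetical data frame with a duplicate row, a missing value,
    # and a zero-variance feature
    library(caret)
    set.seed(1863)
    df <- data.frame(
      troops   = rnorm(10, 1000, 200),
      guns     = rnorm(10, 20, 5),
      constant = rep(1, 10),
      army     = rep(c("Union", "Confederate"), 5),
      stringsAsFactors = TRUE
    )
    df <- rbind(df, df[1, ])                     # inject a duplicate observation
    df$troops[3] <- NA                           # inject a missing value

    df <- df[!duplicated(df), ]                  # dedupe observations
    nzv <- nearZeroVar(df)                       # zero/near-zero variance columns
    if (length(nzv) > 0) df <- df[, -nzv]
    df$troops[is.na(df$troops)] <-
      median(df$troops, na.rm = TRUE)            # naive median imputation

    dummies  <- model.matrix(~ army - 1, data = df)   # one-hot encode the factor
    num_cols <- sapply(df, is.numeric)
    high_cor <- findCorrelation(cor(df[, num_cols]),
                                cutoff = 0.9)    # indices of highly correlated features
    df[, num_cols] <- scale(df[, num_cols])      # center and scale numeric features

In practice, you'd replace the naive median imputation and the 0.9 correlation cutoff with choices appropriate to your data; the point here is only the order and nature of the steps.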

Many feel that this is a daunting task. I don't and, in fact, I quite enjoy it. If done correctly and with a judicious application of judgment, it should reduce the amount of time spent at this first stage of a project and facilitate training your learning algorithm. None of the preceding steps is challenging, but writing the code to perform each task can take quite a bit of time.

Well, that's the benefit of this chapter. The example to follow will walk you through the tasks and the R code that accomplishes them. The code is flexible enough that you should be able to apply it to your own projects. Additionally, it will help you gain an understanding of the data to the point where you can intelligently discuss it with Subject Matter Experts (SMEs) if, in fact, they're available.

In the practical exercise that follows, we'll work with a small dataset. However, it suffers from all of the problems described earlier. Don't let the small size fool you, as we'll take what we learn here and use it for the more massive datasets to come in subsequent chapters.

As background, I painstakingly put together the data we'll use by hand. It's the Order of Battle for the opposing armies at the Battle of Gettysburg, fought during the American Civil War, July 1st-3rd, 1863, along with the casualties reported by the end of the day on July 3rd. I purposely chose this data because I'm reasonably sure you know very little about it. Don't worry, I'm the SME on the battle here and will walk you through it every step of the way. The one thing that we won't cover in this chapter is dealing with large volumes of textual features, which we'll discuss later in this book. Enough said already; let's get started!

The source used in the creation of the dataset is The Gettysburg Campaign in Numbers and Losses: Synopses, Orders of Battle, Strengths, Casualties, and Maps, June 9-July 14, 1863, by J. David Petruzzi and Steven A. Stanley.