Machine Learning with R Cookbook, Second Edition
Analyze data and build predictive models

Product type: Paperback
Published: October 2017
Publisher: Packt
ISBN-13: 9781787284395
Length: 572 pages
Edition: 2nd
Authors (2): Ashish Bhatia, Yu-Wei, Chiu (David Chiu)
Table of Contents (15)

Preface
1. Practical Machine Learning with R
2. Data Exploration with Air Quality Datasets
3. Analyzing Time Series Data
4. R and Statistics
5. Understanding Regression Analysis
6. Survival Analysis
7. Classification 1 - Tree, Lazy, and Probabilistic
8. Classification 2 - Neural Network and SVM
9. Model Evaluation
10. Ensemble Learning
11. Clustering
12. Association Analysis and Sequence Mining
13. Dimension Reduction
14. Big Data Analysis (R and Hadoop)

Handling missing data and split and surrogate variables


Missing data can be a curse for analysis and prediction: it leads to inaccurate inferences from the data. The simplest way to handle missing data is to ignore it, removing the affected observations from the dataset. This approach is easy but rarely efficient, since it discards information the model could have used. A common rule of thumb is that if missing values account for less than 5 percent of the dataset, discarding them will not materially affect the results.
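As a minimal sketch of this deletion approach (the small data frame and its column names here are illustrative, not taken from the recipe):

```r
# Build a small data frame with a few missing values
d <- data.frame(x = c(1, 2, NA, 4, 5),
                y = c(10, NA, 30, 40, 50))

# Fraction of rows containing at least one NA
mean(!complete.cases(d))   # 0.4 here -- well above the 5 percent rule of thumb

# Listwise deletion: keep only the fully observed rows
d_complete <- na.omit(d)
nrow(d_complete)           # 3 rows remain
```

With 40 percent of the rows affected, deletion would throw away too much data; that is exactly the situation where imputation, as covered next, is preferable.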

Getting ready

This recipe will familiarize us with using the mice package to impute missing values.

How to do it...

Perform the following steps in R:

  1. Install and load the required packages, create a data frame containing missing values, and visualize the missingness pattern:
        > install.packages("mice")
        > install.packages("randomForest")
        > install.packages("VIM")
        > library(mice)
        > library(randomForest)
        > library(VIM)
        > t = data.frame(x = c(1:100), y = c(1:100))
        > t$x[sample(1:100, 10)] = NA
        > t$y[sample(1:100, 20)] = NA
        > aggr(t)  # VIM's aggr() plots the proportion and pattern of missing values
  2. Tweaking the aggr function...
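The recipe goes on to fill the gaps with mice itself. As a brief, hedged preview using only the package's defaults (the parameter choices below are my own, not the book's), `mice()` builds imputations and `complete()` returns the filled-in data frame:

```r
library(mice)

# t is the data frame with NAs created in step 1
t <- data.frame(x = c(1:100), y = c(1:100))
t$x[sample(1:100, 10)] <- NA
t$y[sample(1:100, 20)] <- NA

# Impute with predictive mean matching; m = 1 keeps a single imputed dataset
imp <- mice(t, m = 1, method = "pmm", seed = 42, printFlag = FALSE)

# complete() extracts the data frame with the NAs filled in
t_filled <- complete(imp)
sum(is.na(t_filled))   # check that no NAs remain
```

Predictive mean matching (`pmm`) is mice's default for numeric columns; it borrows observed values from similar rows, so imputed values always stay within the range of the real data.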