Hands-On Ensemble Learning with R

Hands-On Ensemble Learning with R: A beginner's guide to combining the power of machine learning algorithms using ensemble techniques

Formats: eBook $35.98 (list price $39.99) · Paperback $48.99 · Subscription free trial (renews at $19.99/month)

Hands-On Ensemble Learning with R

Chapter 2. Bootstrapping

As seen in the previous chapter, statistical inference is enhanced to a very large extent by the use of computational power. We also looked at the process of permutation tests, wherein the same test is applied multiple times to resamples of the given data under the (null) hypothesis. The rationale behind resampling methods is similar: if the sample is truly random and the observations are generated from the same identical distribution, we have a valid reason to resample the same set of observations with replacement, since any observation might just as well have occurred multiple times rather than as a single instance.

This chapter will begin with a formal definition of resampling, followed by a look at the jackknife technique. This will be applied to multiple, albeit relatively simple, problems, and we will look at the definition of the pseudovalues first. The bootstrap method, invented by Efron, is probably the most useful resampling...

Technical requirements

We will be using the following libraries in the chapter:

  • ACSWR
  • boot
  • car
  • gee
  • mvtnorm
  • pseudo
  • RSADBE
  • survival

The jackknife technique

Quenouille (1949) invented the jackknife technique. Its purpose was to reduce bias by looking at multiple samples of the data in a methodical way. The name jackknife seems to have been coined by the well-known statistician John W. Tukey. Mainly due to the lack of computational power at the time, the advances and utility of the jackknife method were restricted. Efron invented the bootstrap method in 1979 (see the following section for its applications) and established its connection with the jackknife method. In fact, the two methods have a lot in common and are generally put under the umbrella of resampling methods.

Suppose that we draw a random sample X1, X2, …, Xn of size n from a probability distribution F, and we denote by θ the parameter of interest. Let θ̂ be an estimator of θ, and here we don't have the probability distribution of θ̂ for a given θ. Resampling methods will help in carrying out statistical inference when the probability distribution is unknown. A formal definition...
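As a concrete illustration of the pseudovalue idea, the following base R fragment (a minimal sketch on simulated data, not one of the book's examples) jackknifes the sample mean. The ith pseudovalue is n·θ̂ − (n−1)·θ̂(−i), where θ̂(−i) is the estimate with the ith observation left out:

```r
# Jackknife via pseudovalues, illustrated on the sample mean (simulated data).
set.seed(123)
x <- rnorm(25, mean = 10, sd = 2)
n <- length(x)
theta_hat <- mean(x)                                       # estimate on the full sample
theta_loo <- sapply(seq_len(n), function(i) mean(x[-i]))   # leave-one-out estimates
pseudo    <- n * theta_hat - (n - 1) * theta_loo           # pseudovalues
jack_est  <- mean(pseudo)                                  # jackknife point estimate
jack_se   <- sqrt(var(pseudo) / n)                         # jackknife standard error
```

For the sample mean, the pseudovalues reduce to the observations themselves, so jack_est coincides with theta_hat; the benefit of the construction shows up for biased estimators, where the mean of the pseudovalues corrects the bias.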

Bootstrap – a statistical method

In this section, we will explore complex statistical functionals. What is the statistical distribution of the correlation between two random variables? If the normality assumption does not hold for the multivariate data, what is an alternative way to obtain the standard error and a confidence interval? Efron (1979) invented the bootstrap technique, which provides solutions that enable statistical inference for complex statistical functionals. In Chapter 1, Introduction to Ensemble Techniques, the permutation test, which repeatedly draws resamples of the given sample and carries out the test on each resample, was introduced. In theory, the permutation test requires C(m+n, m) resamples, where m and n are the numbers of observations in the two samples, though in practice one stops after drawing enough resamples. The bootstrap method works in a similar way and is an important resampling method.

Let X1, X2, …, Xn be an independent random...
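To make the resampling scheme concrete before turning to the boot package, here is a plain base R loop (a sketch with simulated data; the book's own examples use its accompanying datasets) that bootstraps the correlation coefficient asked about above:

```r
# Nonparametric bootstrap of the correlation coefficient (simulated data).
set.seed(123)
n <- 50
x <- rnorm(n)
y <- 0.6 * x + rnorm(n)

B <- 1000
boot_cor <- replicate(B, {
  idx <- sample(n, replace = TRUE)   # resample row indices with replacement
  cor(x[idx], y[idx])               # recompute the statistic on the resample
})

se_boot <- sd(boot_cor)                          # bootstrap standard error
ci_perc <- quantile(boot_cor, c(0.025, 0.975))   # percentile confidence interval
```

Note that the pairs (x, y) are resampled jointly through a common index vector; resampling the two variables independently would destroy the correlation being estimated.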

The boot package

The boot package is one of the core R packages, and it is optimized for the implementation of bootstrap methods. In the previous examples, we mostly used loops for carrying out the resampling technique. Here, we will look at how to use the boot R package.

The main structure of the boot function is as follows:

boot(data, statistic, R, sim = "ordinary", stype = c("i", "f", "w"), 
     strata = rep(1,n), L = NULL, m = 0, weights = NULL, 
     ran.gen = function(d, p) d, mle = NULL, simple = FALSE, ...,
     parallel = c("no", "multicore", "snow"),
     ncpus = getOption("boot.ncpus", 1L), cl = NULL)

The central arguments of the function are data, statistic, R, and stype. The data argument is the standard one, as with most R functions. The statistic argument is the most important one for the implementation of the boot function, and it is this function that will be applied to the bootstrap samples obtained...
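A minimal call, using the built-in rivers dataset rather than the book's data, looks like the following. The key point is the signature of statistic: it receives the data together with an index vector describing the current resample:

```r
library(boot)   # boot ships with standard R distributions

# statistic(data, index): compute the statistic on the resample data[index]
mean_stat <- function(data, i) mean(data[i])

set.seed(123)
out <- boot(data = rivers, statistic = mean_stat, R = 500)
out$t0      # the statistic evaluated on the original sample
sd(out$t)   # bootstrap estimate of the standard error
```

Here out$t is an R-by-1 matrix holding the statistic for each of the R resamples, and out$t0 is the value on the original data.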

Bootstrap and testing hypotheses

We begin the bootstrap hypothesis testing problems with the t-test for comparing means and the F-test for comparing variances. It is understood that, since we are assuming a normal distribution for the two populations under comparison, the distributional properties of the test statistics are well known. To carry out the nonparametric bootstrap for the t-statistic based on the t-test, we first define the function, and then run the bootstrap function boot on the Galton dataset, which is available in the galton data.frame from the RSADBE package. It consists of 928 pairs of observations, each pair consisting of the height of a parent and the height of their child. First, we define the t2 function, load the Galton dataset, and run the boot function as the following unfolds:

> t2 <- function(data,i) {
+   p <- t.test(data[i,1],data[i,2],var.equal=TRUE)$statistic
+   p
+ }
> data(galton)
> gt <- boot(galton,t2,R=100...
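The call above is truncated in the source. A self-contained run, under the assumption that R = 100 resamples was intended, and with simulated parent/child heights standing in for RSADBE's galton data.frame (which may not be installed), would look like:

```r
library(boot)
set.seed(123)

# Simulated stand-in for the 928-pair galton data.frame from RSADBE
# (the means and spreads below are illustrative assumptions, in inches).
galton_like <- data.frame(parent = rnorm(928, mean = 68.3, sd = 1.8),
                          child  = rnorm(928, mean = 68.1, sd = 2.5))

# Bootstrapped two-sample t-statistic: columns are resampled jointly by row index.
t2 <- function(data, i) t.test(data[i, 1], data[i, 2], var.equal = TRUE)$statistic

gt <- boot(galton_like, t2, R = 100)   # gt$t holds 100 bootstrapped t-statistics
```

The resulting gt$t can then be compared against the observed t-statistic to judge how unusual it is under resampling.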

Bootstrapping regression models

The US Crime dataset introduced in Chapter 1, Introduction to Ensemble Techniques, is an example for which the linear regression model might be a good fit. In this example, we are interested in understanding the crime rate (R) as a function of thirteen related variables, such as average age, the southern state indicator, and so on. Mathematically, the linear regression model is as follows:

y = β0 + β1x1 + β2x2 + … + βpxp + ε

Here, x1, x2, …, xp are the p covariates, β0 is the intercept term, β1, β2, …, βp are the regression coefficients, and ε is the error term, assumed to follow a normal distribution N(0, σ²). The covariates can be written in vector form, and the ith observation can be summarized as (yi, xi'), where xi' = (xi1, xi2, …, xip). The n observations (y1, x1'), (y2, x2'), …, (yn, xn') are assumed to be stochastically independent. The linear regression model has been detailed in many classical regression books; see Draper and Smith (1999), for instance. A recent book that details the implementation of the linear regression model in R is Ciaburro (2018). As the reader might have guessed...
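The case-resampling (row-resampling) bootstrap for such a model can be sketched with boot and lm. The built-in mtcars data stands in here for the US Crime dataset, which the book loads elsewhere; the statistic is now the whole coefficient vector rather than a scalar:

```r
library(boot)

# Resample whole rows (cases) and refit the linear model on each resample.
coef_stat <- function(data, i) coef(lm(mpg ~ wt + hp, data = data[i, ]))

set.seed(123)
reg_boot <- boot(mtcars, coef_stat, R = 999)
apply(reg_boot$t, 2, sd)   # bootstrap SEs for (Intercept), wt, and hp
```

Each column of reg_boot$t corresponds to one coefficient, so column-wise standard deviations give bootstrap standard errors that do not rely on the normality of ε.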

Bootstrapping survival models*

In the first section, we looked at the role of pseudovalues in carrying out inference related to survival data. The main idea behind the use of pseudovalues is to replace the incomplete observations with an appropriate (expected) value and then use the flexible framework of the generalized estimating equation. Survival analysis and its related specialized methods will be detailed in Chapter 10, Ensembling Survival Models, of the book. We will briefly introduce the notation here as required to set up the parameters. Let T denote the survival time, or the time to the event of interest; we naturally have T ≥ 0, and T is a continuous random variable. Suppose that the lifetime cumulative distribution is F and the associated density function is f. Since the lifetimes T are incomplete for some of the observations and subject to censoring, we will not be able to properly infer interesting parameters such as the mean survival time or the median survival time. Since...


Key benefits

  • Implement machine learning algorithms to build efficient ensemble models
  • Explore powerful R packages to create predictive models using ensemble methods
  • Learn to build ensemble models on large datasets using a practical approach

Description

Ensemble techniques are used for combining two or more similar or dissimilar machine learning algorithms to create a stronger model. Such a model delivers superior prediction power and can give your datasets a boost in accuracy. Hands-On Ensemble Learning with R begins with the important statistical resampling methods. You will then walk through the central trilogy of ensemble techniques – bagging, random forest, and boosting – and learn how they can be used to provide greater accuracy on large datasets using popular R packages. You will learn how to combine model predictions using different machine learning algorithms to build ensemble models. In addition to this, you will explore how to improve the performance of your ensemble models. By the end of this book, you will have learned how machine learning algorithms can be combined to reduce common problems and build simple, efficient ensemble models with the help of real-world examples.

Who is this book for?

This book is for you if you are a data scientist or machine learning developer who wants to implement machine learning techniques by building ensemble models with the power of R. You will learn how to combine different machine learning algorithms to perform efficient data processing. Basic knowledge of machine learning techniques and programming knowledge of R would be an added advantage.

What you will learn

  • Carry out an essential review of resampling methods, bootstrap, and jackknife
  • Explore the key ensemble methods: bagging, random forests, and boosting
  • Use multiple algorithms to make strong predictive models
  • Enjoy a comprehensive treatment of boosting methods
  • Supplement methods with statistical tests, such as ROC analysis
  • Walk through data structures in classification, regression, survival, and time series data
  • Use the supplied R code to implement ensemble methods
  • Learn the stacking method to combine heterogeneous machine learning models

Product Details

Publication date: Jul 27, 2018
Length: 376 pages
Edition: 1st
Language: English
ISBN-13: 9781788624145




Table of Contents

14 Chapters
1. Introduction to Ensemble Techniques
2. Bootstrapping
3. Bagging
4. Random Forests
5. The Bare Bones Boosting Algorithms
6. Boosting Refinements
7. The General Ensemble Technique
8. Ensemble Diagnostics
9. Ensembling Regression Models
10. Ensembling Survival Models
11. Ensembling Time Series Models
12. What's Next?
A. Bibliography
Index

Customer reviews

Rating distribution: 3 out of 5 stars (1 rating)
5 star: 0% · 4 star: 0% · 3 star: 100% · 2 star: 0% · 1 star: 0%

Grady Heller, Aug 30, 2023 (3 out of 5 stars, Amazon verified review)

The book itself was poorly printed, and the content is basically just code put together without sufficient explanation; it is more of a basic cookbook.

Dislikes: the print was straight up bad, with print errors and some illegible charts; weird grammar here and there; and a number of things are just not explained by the author. If I didn't already know machine learning to some extent I'd be lost, and it can also just be very hard to follow the author, which is a rough combination.

Likes: it does provide some intro info on ensembles, lots of basic coding examples, and sections on survival and time series applications that help round out the content. I do find the book practical if you just want to get a start with ensemble techniques after learning basic machine learning techniques.

FAQs

What is included in a Packt subscription?

A subscription provides you with full access to view all Packt and licensed content online, including exclusive access to Early Access titles. Depending on the tier chosen, you can also earn credits and discounts to use for owning content.

How can I cancel my subscription?

To cancel your subscription, simply go to the account page (found in the top right of the page, or at https://subscription.packtpub.com/my-account/subscription). From there, you will see the 'cancel subscription' button in the grey box containing your subscription information.

What are credits?

Credits can be earned by reading 40 sections of any title within the payment cycle – a month starting from the day of subscription payment. You also earn a credit every month if you subscribe to our annual or 18-month plans. Credits can be used to buy DRM-free books, the same way that you would pay for a book. Your credits can be found on the subscription homepage – subscription.packtpub.com – by clicking on the 'My Library' dropdown and selecting 'credits'.

What happens if an Early Access Course is cancelled?

Projects are rarely cancelled, but sometimes it's unavoidable. If an Early Access course is cancelled or excessively delayed, you can exchange your purchase for another course. For further details, please contact us here.

Where can I send feedback about an Early Access title?

If you have any feedback about the product you're reading, or Early Access in general, then please fill out a contact form here and we'll make sure the feedback gets to the right team. 

Can I download the code files for Early Access titles?

We try to ensure that all books in Early Access have code available to use, download, and fork on GitHub. This helps us be more agile in the development of the book, and helps keep the often changing code base of new versions and new technologies as up to date as possible. Unfortunately, however, there will be rare cases when it is not possible for us to have downloadable code samples available until publication.

When we publish the book, the code files will also be available to download from the Packt website.

How accurate is the publication date?

The publication date is as accurate as we can be at any point in the project. Unfortunately, delays can happen. Often those delays are out of our control, such as changes to the technology code base or delays in the tech release. We do our best to give you an accurate estimate of the publication date at any given time, and as more chapters are delivered, the more accurate the delivery date will become.

How will I know when new chapters are ready?

We'll let you know every time there has been an update to a course that you've bought in Early Access. You'll get an email to let you know there has been a new chapter, or a change to a previous chapter. The new chapters are automatically added to your account, so you can also check back there any time you're ready and download or read them online.

I am a Packt subscriber, do I get Early Access?

Yes, all Early Access content is fully available through your subscription. You will need to have a paid or active trial subscription in order to access all titles.

How is Early Access delivered?

Early Access is currently only available as a PDF or through our online reader. As we make changes or add new chapters, the files in your Packt account will be updated so you can download them again or view them online immediately.

How do I buy Early Access content?

Early Access is a way of us getting our content to you quicker, but the method of buying an Early Access course is still the same. Just find the course you want to buy, go through the checkout steps, and you'll get a confirmation email from us with information and a link to the relevant Early Access courses.

What is Early Access?

Keeping up to date with the latest technology is difficult; new versions, new frameworks, new techniques. This feature gives you a head start on our content as it's being created. With Early Access you'll receive each chapter as it's written, and get regular updates throughout the product's development, as well as the final course as soon as it's ready.

We created Early Access as a means of giving you the information you need as soon as it's available. As we go through the process of developing a course, 99% of it can be ready but we can't publish until that last 1% falls into place. Early Access helps to unlock the potential of our content early, to help you start your learning when you need it most. You not only get access to every chapter as it's delivered, edited, and updated, but you'll also get the finalized, DRM-free product to download in any format you want when it's published. As a member of Packt, you'll also be eligible for our exclusive offers, including a free course every day, and discounts on new and popular titles.