scikit-learn Cookbook, Second Edition: Over 80 recipes for machine learning in Python with scikit-learn

By Trent Hauck
3.7 (3 Ratings)
Paperback Nov 2017 374 pages 2nd Edition
eBook
Can$30.99 Can$44.99
Paperback
Can$55.99

What do you get with a Packt Subscription?

Free for first 7 days. $19.99 p/m after that. Cancel any time!
  • Unlimited ad-free access to the largest independent learning library in tech. Access this title and thousands more!
  • 50+ new titles added per month, including many first-to-market concepts and exclusive early access to books as they are being written.
  • Innovative learning tools, including AI book assistants, code context explainers, and text-to-speech.
  • Thousands of reference materials covering every tech concept you need to stay up to date.

scikit-learn Cookbook, Second Edition

Pre-Model Workflow and Pre-Processing

In this chapter we will see the following recipes:

  • Creating sample data for toy analysis
  • Scaling data to the standard normal distribution
  • Creating binary features through thresholding
  • Working with categorical variables
  • Imputing missing values through various strategies
  • A linear model in the presence of outliers
  • Putting it all together with pipelines
  • Using Gaussian processes for regression
  • Using SGD for regression

Introduction

What is data, and what are we doing with it?

A simple answer is that we attempt to place our data as points on paper, graph them, think, and look for simple explanations that approximate the data well. The simple geometric line of F=ma (force being proportional to acceleration) explained a lot of noisy data for hundreds of years. I tend to think of data science as data compression at times.

Sometimes, when a machine is trained on only win-lose outcomes (of winning games of checkers, for example), I think of artificial intelligence: in such a case, it is never taught explicit directions on how to play to win.

This chapter deals with the pre-processing of data in scikit-learn. Some questions you can ask about your dataset are as follows:

  • Are there missing values in your dataset?
  • Are there outliers (points far away from the others) in your set?
  • What are the variables...

Creating sample data for toy analysis

If possible, use some of your own data with this book, but in case you cannot, we'll learn how to use scikit-learn to create toy data. scikit-learn's artificially constructed datasets are interesting in their own right.

Getting ready

Much as with loading built-in datasets and fetching new datasets, the functions for creating sample datasets follow the naming convention make_*. Just to be clear, this data is purely artificial:

from sklearn import datasets
datasets.make_*?

datasets.make_biclusters
datasets.make_blobs
datasets.make_checkerboard
datasets.make_circles
datasets.make_classification
...

To save typing, import the datasets module as d, and numpy...
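To make the convention concrete, here is a minimal sketch (not the book's own listing) using two of the make_* functions; the sizes and parameters are arbitrary choices for illustration:

```python
import numpy as np
from sklearn import datasets

# Artificial regression data: 100 samples, 5 features, with Gaussian noise
X, y = datasets.make_regression(n_samples=100, n_features=5,
                                noise=10.0, random_state=0)
print(X.shape, y.shape)  # (100, 5) (100,)

# Artificial clustering data: three Gaussian blobs in two dimensions
X_blobs, labels = datasets.make_blobs(n_samples=150, centers=3,
                                      n_features=2, random_state=0)
print(X_blobs.shape, np.unique(labels))  # (150, 2) [0 1 2]
```

Each make_* function lets you control the size, noise, and structure of the data, which makes them handy for testing an algorithm before touching real data.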

Scaling data to the standard normal distribution

A recommended pre-processing step is to scale columns to the standard normal distribution. The standard normal is probably the most important distribution in statistics. If you've ever been introduced to statistics, you have almost certainly seen z-scores. In truth, that's all this recipe is about: transforming our features from their original distribution into z-scores.

Getting ready

Scaling data is extremely useful. Many machine learning algorithms perform differently (and sometimes incorrectly) when the features exist on different scales. For example, SVMs perform poorly if the data isn't scaled, because they use a distance...
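As a quick sketch of what this recipe builds toward, StandardScaler performs exactly this z-score transformation (the toy matrix below is an illustrative assumption, not data from the book):

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

# Two features on wildly different scales
X = np.array([[1.0, 200.0],
              [2.0, 300.0],
              [3.0, 400.0]])

scaler = StandardScaler()
X_scaled = scaler.fit_transform(X)

# Each column is now a z-score: mean approximately 0, standard deviation approximately 1
print(X_scaled.mean(axis=0))
print(X_scaled.std(axis=0))
```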

Creating binary features through thresholding

In the last recipe, we looked at transforming our data into the standard normal distribution. Now, we'll talk about another transformation, one that is quite different. Instead of working with the distribution to standardize it, we'll purposely throw away data; if we have good reason, this can be a very smart move. Often, in what is ostensibly continuous data, there are discontinuities that can be determined via binary features.

Additionally, note that in the previous chapter, we turned a classification problem into a regression problem. With thresholding, we can turn a regression problem into a classification problem. This happens in some data science contexts.
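A minimal sketch of such thresholding, using scikit-learn's Binarizer (the threshold and values here are illustrative assumptions):

```python
import numpy as np
from sklearn.preprocessing import Binarizer

X = np.array([[0.5], [1.2], [3.4], [0.1]])

# Values strictly above the threshold map to 1, the rest to 0
binarizer = Binarizer(threshold=1.0)
X_binary = binarizer.fit_transform(X)
print(X_binary.ravel())  # [0. 1. 1. 0.]
```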

Getting ready

...

Working with categorical variables

Categorical variables are a problem. On the one hand, they provide valuable information; on the other hand, they are probably stored as text, either the actual text or integers corresponding to the text, such as an index in a lookup table.

So, we clearly need to represent our text as integers for the model's sake, but we can't just use an id field or represent the categories naively. That's because we need to avoid a problem similar to the one in the Creating binary features through thresholding recipe: if we encode categories as if they were continuous data, the model will interpret them as continuous.
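One standard remedy is one-hot encoding, which gives each category its own binary column so no false ordering is implied. A minimal sketch with scikit-learn's OneHotEncoder (the toy categories are illustrative assumptions):

```python
import numpy as np
from sklearn.preprocessing import OneHotEncoder

colors = np.array([['red'], ['green'], ['blue'], ['green']])

encoder = OneHotEncoder()  # returns a sparse matrix by default
onehot = encoder.fit_transform(colors).toarray()

print(encoder.categories_)  # categories are stored sorted: blue, green, red
print(onehot)               # exactly one 1 per row, in that category's column
```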

Getting ready

The Boston dataset won't be useful for this section. While it's useful for feature binarization, it...

Imputing missing values through various strategies

Data imputation is critical in practice, and thankfully there are many ways to deal with it. In this recipe, we'll look at a few of the strategies. However, be aware that there might be other approaches that fit your situation better.

scikit-learn comes with the ability to perform fairly common imputations; it will simply apply some transformations to the existing data and fill in the NAs. However, if the dataset is missing data for a known reason (for example, response times for a server that times out after 100 ms), it might be better to take a statistical approach through other packages, such as a Bayesian treatment via PyMC, hazard models via Lifelines, or something home-grown.
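As a minimal sketch, in current scikit-learn the common strategies live in SimpleImputer (the book's 2017-era code uses the older sklearn.preprocessing.Imputer, which has since been removed); the toy matrix below is an illustrative assumption:

```python
import numpy as np
from sklearn.impute import SimpleImputer

X = np.array([[1.0, 2.0],
              [np.nan, 3.0],
              [7.0, np.nan]])

# Fill each NaN with its column mean; 'median' and 'most_frequent' also exist
imputer = SimpleImputer(strategy='mean')
print(imputer.fit_transform(X))
```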

...

A linear model in the presence of outliers

In this recipe, instead of traditional linear regression, we will try the Theil-Sen estimator to deal with some outliers.

Getting ready

First, create the data corresponding to a line with a slope of 2:

import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline

num_points = 100
x_vals = np.arange(num_points)
y_truth = 2 * x_vals
plt.plot(x_vals, y_truth)

Add noise to that data and label it as y_noisy:

y_noisy = y_truth.copy()
# Change y-values of some points on the line
y_noisy[20:40] = y_noisy[20:40] * (-4 * x_vals[20:40]) - 100

plt.title("Noise in y-direction")
plt.xlim([0, 100])
plt.scatter(x_vals, y_noisy, marker='x')
...
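Continuing the sketch, fitting TheilSenRegressor next to ordinary least squares shows the robustness (this is a plausible continuation under the same setup, not the book's verbatim code):

```python
import numpy as np
from sklearn.linear_model import LinearRegression, TheilSenRegressor

num_points = 100
x_vals = np.arange(num_points, dtype=float)
y_noisy = 2 * x_vals
y_noisy[20:40] = y_noisy[20:40] * (-4 * x_vals[20:40]) - 100  # corrupt 20% of points

X = x_vals.reshape(-1, 1)
theil_sen = TheilSenRegressor(random_state=0).fit(X, y_noisy)
ols = LinearRegression().fit(X, y_noisy)

# The median-based Theil-Sen slope stays near the true slope of 2,
# while the least-squares slope is dragged far away by the outliers
print("Theil-Sen slope:", theil_sen.coef_[0])
print("OLS slope:      ", ols.coef_[0])
```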

Putting it all together with pipelines

Now that we've used pipelines and data transformation techniques, we'll walk through a more complicated example that combines several of the previous recipes into a pipeline.

Getting ready

In this section, we'll show off some more of pipeline's power. When we used it earlier to impute missing values, it was only a quick taste; here, we'll chain together multiple pre-processing steps to show how pipelines can remove extra work.

Let's briefly load the iris dataset and seed it with some missing values:

from sklearn.datasets import load_iris
import numpy as np
iris = load_iris()
iris_data = iris.data
mask = np.random.binomial...
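A minimal sketch of where this is heading: chain an imputer and a scaler so both steps run in one fit_transform call (SimpleImputer stands in for the book's older Imputer class, and the missing-value fraction is an illustrative assumption):

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.impute import SimpleImputer
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

iris = load_iris()
iris_data = iris.data.copy()

# Knock out roughly a quarter of the entries at random
np.random.seed(0)
mask = np.random.binomial(1, 0.25, iris_data.shape).astype(bool)
iris_data[mask] = np.nan

pipe = Pipeline([('impute', SimpleImputer(strategy='mean')),
                 ('scale', StandardScaler())])
clean = pipe.fit_transform(iris_data)

print(clean.shape)            # (150, 4)
print(np.isnan(clean).any())  # False: every NaN was filled
```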

Using Gaussian processes for regression

In this recipe, we'll use a Gaussian process for regression. In the linear models section, we will see how to represent prior information on the coefficients using Bayesian ridge regression.

With a Gaussian process, it's the variance, not the mean, that matters. We assume the mean is 0, so it's the covariance function we'll need to specify.

The basic setup is similar to putting a prior on the coefficients of a typical regression problem. With a Gaussian process, a prior is placed on the functional form of the data, and it's the covariance between data points that is used to model the data, and which therefore must fit the data.

A big advantage of Gaussian processes is that they can predict probabilistically: you can obtain confidence bounds on your predictions. Additionally...
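A minimal sketch using the current GaussianProcessRegressor API (the toy sine data and RBF kernel are illustrative assumptions):

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Noise-free samples from a smooth function
X = np.linspace(0, 5, 20).reshape(-1, 1)
y = np.sin(X).ravel()

gpr = GaussianProcessRegressor(kernel=RBF(), random_state=0)
gpr.fit(X, y)

# return_std=True is what gives the probabilistic "confidence bounds"
y_pred, y_std = gpr.predict(X, return_std=True)
print(float(np.abs(y_pred - y).max()))  # tiny: the GP interpolates its training data
```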

Using SGD for regression

In this recipe, we'll get our first taste of stochastic gradient descent. We'll use it for regression here.

Getting ready

SGD is often an unsung hero in machine learning. Underneath many algorithms, it is SGD doing the work. It's popular due to its simplicity and speed, both very good things to have when dealing with a lot of data. The other nice thing about SGD is that, while it sits at the computational core of many machine learning algorithms, it's also easy to describe: at the end of the day, we apply some transformation to the data, and then we fit the data to the model with a loss function.
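A minimal sketch of that loop with SGDRegressor on synthetic linear data (the coefficients and sizes are illustrative assumptions); note the scaling step, since SGD is sensitive to feature scale:

```python
import numpy as np
from sklearn.linear_model import SGDRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.RandomState(0)
X = rng.uniform(-1, 1, size=(1000, 3))
y = X @ np.array([1.5, -2.0, 0.5]) + 0.1 * rng.randn(1000)

# Transform the data (scaling), then fit with a squared-error loss via SGD
X_scaled = StandardScaler().fit_transform(X)
sgd = SGDRegressor(random_state=0).fit(X_scaled, y)

print(round(sgd.score(X_scaled, y), 3))  # close to 1.0 on this easy problem
```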

...

Key benefits

  • Handle a variety of machine learning tasks effortlessly by leveraging the power of scikit-learn
  • Perform supervised and unsupervised learning with ease, and evaluate the performance of your model
  • Practical, easy-to-understand recipes aimed at helping you choose the right machine learning algorithm

Description

Python is quickly becoming the go-to language for analysts and data scientists due to its simplicity and flexibility, and within the Python data space, scikit-learn is the unequivocal choice for machine learning. This book includes walkthroughs and solutions to common as well as not-so-common problems in machine learning, and shows how scikit-learn can be leveraged to perform various machine learning tasks effectively. The second edition begins with recipes on evaluating the statistical properties of data and generating synthetic data for machine learning modelling. As you progress through the chapters, you will come across recipes that teach you to implement techniques such as data pre-processing, linear regression, logistic regression, k-NN, Naive Bayes, classification, decision trees, and ensembles. Furthermore, you'll learn to optimize your models with multi-class classification, cross-validation, and model evaluation, and dive deeper into implementing deep learning with scikit-learn. Along with covering the enhanced features of model selection, the API, and new classifiers, regressors, and estimators, the book also contains recipes on evaluating and fine-tuning the performance of your model. By the end of this book, you will have explored the plethora of features offered by scikit-learn to solve any machine learning problem you come across.

Who is this book for?

Data analysts already familiar with Python, but not so much with scikit-learn, who want quick solutions to common machine learning problems will find this book very useful. If you are a Python programmer who wants to dive into the world of machine learning in a practical manner, this book will help you too.

What you will learn

  • Build predictive models in minutes using scikit-learn
  • Understand the differences and relationships between classification and regression, the two types of supervised learning
  • Use distance metrics to predict in clustering, a type of unsupervised learning
  • Find points with similar characteristics with nearest neighbors
  • Use automation and cross-validation to find the best model and focus on it for a data product
  • Choose the best of many algorithms, or use several together in an ensemble
  • Create your own estimator with the simple syntax of scikit-learn
  • Explore the feed-forward neural networks available in scikit-learn

Product Details

Publication date : Nov 16, 2017
Length: 374 pages
Edition : 2nd
Language : English
ISBN-13 : 9781787286382




Table of Contents

12 Chapters
1. High-Performance Machine Learning – NumPy
2. Pre-Model Workflow and Pre-Processing
3. Dimensionality Reduction
4. Linear Models with scikit-learn
5. Linear Models – Logistic Regression
6. Building Models with Distance Metrics
7. Cross-Validation and Post-Model Workflow
8. Support Vector Machines
9. Tree Algorithms and Ensembles
10. Text and Multiclass Classification with scikit-learn
11. Neural Networks
12. Create a Simple Estimator

Customer reviews

Rating distribution
3.7 (3 Ratings)
5 star 66.7%
4 star 0%
3 star 0%
2 star 0%
1 star 33.3%
Jose Luis Ramirez, Nov 28, 2017 (5 stars)
I've been working with Python in my AI projects and I came upon this great book of recipes that I can use quickly and practically in every stage of my development. It is easy to use right away, and it references enough theory that you don't have to search around for extra info. The book introduces neural networks in a simple way, and it has robust OOP for more complex AI projects.
Amazon Verified review
Rex Jones, May 03, 2018 (5 stars)
Excellent reference for basic modeling in Python using scikit-learn. Julian Avila has created a great reference that I use in my class. It's easy to incorporate reading assignments from this book.
Amazon Verified review
Miss S Betts, Apr 22, 2020 (1 star)
Purchased the book. Opened it in the cloud reader and started flicking through, and the reader starts displaying the words in a single column down the centre of the screen. Can't find settings to stop it.
Amazon Verified review

FAQs

What is included in a Packt subscription?

A subscription provides you with full access to view all Packt and licensed content online; this includes exclusive access to Early Access titles. Depending on the tier chosen, you can also earn credits and discounts to use for owning content.

How can I cancel my subscription?

To cancel your subscription, simply go to the account page, found in the top right of the page or at https://subscription.packtpub.com/my-account/subscription. From there you will see the 'cancel subscription' button in the grey box with your subscription information.

What are credits?

Credits can be earned by reading 40 sections of any title within the payment cycle, a month starting from the day of subscription payment. You also earn a credit every month if you subscribe to our annual or 18-month plans. Credits can be used to buy DRM-free books, the same way that you would pay for a book. Your credits can be found on the subscription homepage, subscription.packtpub.com, by clicking on the 'My Library' dropdown and selecting 'credits'.

What happens if an Early Access Course is cancelled?

Projects are rarely cancelled, but sometimes it's unavoidable. If an Early Access course is cancelled or excessively delayed, you can exchange your purchase for another course. For further details, please contact us here.

Where can I send feedback about an Early Access title?

If you have any feedback about the product you're reading, or Early Access in general, then please fill out a contact form here and we'll make sure the feedback gets to the right team. 

Can I download the code files for Early Access titles?

We try to ensure that all books in Early Access have code available to use, download, and fork on GitHub. This helps us be more agile in the development of the book, and helps keep the often changing code base of new versions and new technologies as up to date as possible. Unfortunately, however, there will be rare cases when it is not possible for us to have downloadable code samples available until publication.

When we publish the book, the code files will also be available to download from the Packt website.

How accurate is the publication date?

The publication date is as accurate as we can be at any point in the project. Unfortunately, delays can happen. Often those delays are out of our control, such as changes to the technology code base or delays in the tech release. We do our best to give you an accurate estimate of the publication date at any given time, and as more chapters are delivered, the more accurate the delivery date will become.

How will I know when new chapters are ready?

We'll let you know every time there has been an update to a course that you've bought in Early Access. You'll get an email to let you know there has been a new chapter, or a change to a previous chapter. The new chapters are automatically added to your account, so you can also check back there any time you're ready and download or read them online.

I am a Packt subscriber, do I get Early Access?

Yes, all Early Access content is fully available through your subscription. You will need to have a paid for or active trial subscription in order to access all titles.

How is Early Access delivered?

Early Access is currently only available as a PDF or through our online reader. As we make changes or add new chapters, the files in your Packt account will be updated so you can download them again or view them online immediately.

How do I buy Early Access content?

Early Access is a way of us getting our content to you quicker, but the method of buying the Early Access course is still the same. Just find the course you want to buy, go through the check-out steps, and you’ll get a confirmation email from us with information and a link to the relevant Early Access courses.

What is Early Access?

Keeping up to date with the latest technology is difficult; new versions, new frameworks, new techniques. This feature gives you a head start on our content as it's being created. With Early Access you'll receive each chapter as it's written, and get regular updates throughout the product's development, as well as the final course as soon as it's ready.

We created Early Access as a means of giving you the information you need as soon as it's available. As we go through the process of developing a course, 99% of it can be ready but we can't publish until that last 1% falls into place. Early Access helps to unlock the potential of our content early, to help you start your learning when you need it most. You not only get access to every chapter as it's delivered, edited, and updated, but you'll also get the finalized, DRM-free product to download in any format you want when it's published. As a member of Packt, you'll also be eligible for our exclusive offers, including a free course every day, and discounts on new and popular titles.