A minimal machine learning recipe – SVM classification

Machine learning is all about making predictions. To make predictions, we will:

  • State the problem to be solved
  • Choose a model to solve the problem
  • Train the model
  • Make predictions
  • Measure how well the model performed

Getting ready

Returning to the iris example, we store the first two features (columns) of the observations as X and the target as y, following a convention in the machine learning community:

from sklearn import datasets

iris = datasets.load_iris()   # the dataset used throughout this chapter
X = iris.data[:, :2]          # keep only the first two features (columns)
y = iris.target
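
As a quick sanity check (this snippet is an aside, not part of the original recipe), you can confirm the shapes of X and y and the names of the three flower classes:

print(X.shape, y.shape)   # (150, 2) (150,)
print(iris.target_names)  # ['setosa' 'versicolor' 'virginica']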

How to do it...

  1. First, we state the problem. We are trying to determine the flower-type category of a set of new observations. This is a classification task. The data available includes a target variable, which we have named y, so this is a supervised classification problem.
The task of supervised learning is to predict values of an output variable with a model trained on input variables and known values of that output variable.
  2. Next, we choose a model to solve the supervised classification problem. For now, we will use a support vector classifier. It is a commonly used algorithm because of its simplicity and interpretability (interpretable meaning easy to read into and understand).
  3. To measure the performance of prediction, we will split the dataset into training and test sets. The training set is the data we will learn from. The test set is the data we hold out and pretend not to know, because we would like to measure the performance of our learning procedure on it. So, import a function that will split the dataset:
from sklearn.model_selection import train_test_split
  4. Apply the function to both the observation and target data:

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=1)

The test size is 0.25, or 25% of the whole dataset. Setting the random state to one fixes the random seed of the function so that you get the same split every time you call it, which is important for now so that we can reproduce the same results consistently.
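
To see these arguments in action, here is a small sketch (an illustration, not part of the recipe): with 150 iris samples, a test size of 0.25 rounds up to 38 test rows and leaves 112 training rows, and repeating the call with the same random state reproduces the split exactly:

import numpy as np

print(X_train.shape, X_test.shape)  # (112, 2) (38, 2)

# the same random_state yields the same split every time
X_train2, X_test2, y_train2, y_test2 = train_test_split(X, y, test_size=0.25, random_state=1)
print(np.array_equal(X_train, X_train2))  # True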

  5. Now load a regularly used estimator, a support vector machine:
from sklearn.svm import SVC
  6. You have imported a support vector classifier from the svm module. Now create an instance of a linear SVC:
clf = SVC(kernel='linear', random_state=1)

The random state is fixed to reproduce the same results with the same code later.

The supervised models in scikit-learn implement a fit(X, y) method, which trains the model and returns the trained model. X is a subset of the observations, and each element of y corresponds to the target of each observation in X. Here, we fit a model on the training data:

clf.fit(X_train, y_train)

Now, the clf variable is the fitted, or trained, model.
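
If you are curious about what fitting stored on clf, a linear SVC exposes the learned separating hyperplanes and the support vectors it kept; the attribute names below are scikit-learn's, but this peek is an aside rather than part of the recipe:

print(clf.coef_.shape)             # one weight row per pair of classes: (3, 2)
print(clf.intercept_)              # one bias term per pair of classes
print(clf.support_vectors_.shape)  # the training points that define the margins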

The estimator also has a predict(X) method that returns predictions for several unlabeled observations. Here we pass it X_test and store the predicted values as y_pred. Note that the method does not return the estimator; it returns a set of predictions:

y_pred = clf.predict(X_test)

So far, you have done all but the last step. To examine the model performance, load a scorer from the metrics module:

from sklearn.metrics import accuracy_score

With the scorer, compare the predictions with the held-out test targets:

accuracy_score(y_test, y_pred)

0.76315789473684215
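
Accuracy is simply the fraction of test labels predicted correctly, so you can recompute the same number by hand; this short sketch is for intuition and is not part of the original text:

import numpy as np

print(np.mean(y_pred == y_test))  # matches accuracy_score(y_test, y_pred)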

How it works...

Without knowing very much about the details of support vector machines, we have implemented a predictive model. To evaluate it, we held out one-fourth of the data and examined how the SVC performed on that held-out portion. In the end, we obtained a number that measures accuracy, or how well the model performed.

There's more...

To summarize, we will do all the steps with a different algorithm, logistic regression:

  1. First, import LogisticRegression:
from sklearn.linear_model import LogisticRegression
  2. Then write a program with the modeling steps:
    1. Split the data into training and testing sets.
    2. Fit the logistic regression model.
    3. Predict using the test observations.
    4. Measure the accuracy of the predictions with y_test versus y_pred:
from sklearn import datasets
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# load the iris data and keep the first two features
iris = datasets.load_iris()
X = iris.data[:, :2]
y = iris.target

# split the data into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=1)

# train the model
clf = LogisticRegression(random_state=1)
clf.fit(X_train, y_train)

# predict with logistic regression
y_pred = clf.predict(X_test)

# examine the model accuracy
accuracy_score(y_test, y_pred)

0.60526315789473684

This number is lower, yet we cannot draw any conclusions by comparing the two models, SVC and logistic regression. We cannot compare them because we were not supposed to look at the test set while building our model. If we made a choice between SVC and logistic regression, the choice would be part of our model as well, so the test set cannot be involved in making it. Cross-validation, which we will look at next, is a way to choose between models.
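
As a preview of that idea, here is a minimal sketch (not the book's code, and assuming the X_train and y_train from above) that scores both models on several internal splits of the training data, leaving the test set untouched:

from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression

# each model is trained and scored on 4 different train/validation splits
svc_scores = cross_val_score(SVC(kernel='linear', random_state=1), X_train, y_train, cv=4)
lr_scores = cross_val_score(LogisticRegression(random_state=1), X_train, y_train, cv=4)
print(svc_scores.mean(), lr_scores.mean())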
