Machine learning is all about making predictions. To make predictions, we will:
- State the problem to be solved
- Choose a model to solve the problem
- Train the model
- Make predictions
- Measure how well the model performed
Returning to the iris example, we store the first two features (columns) of the observations as X and the target as y, following a convention in the machine learning community:
X = iris.data[:, :2]
y = iris.target
Next, split the data into a training set and a test set:
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=1)
The test size is 0.25, meaning 25% of the whole dataset is held out for testing. Setting the random state to 1 fixes the function's random seed so that you get the same split every time you call it, which is important for reproducing the same results consistently.
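As a quick sanity check, you can print the shapes of the resulting arrays; the counts below assume the full iris dataset of 150 observations:
# Verify the 75/25 split: with 150 iris observations, we expect
# 112 training rows and 38 test rows, each with two feature columns
print(X_train.shape, X_test.shape)
print(y_train.shape, y_test.shape)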
Next, choose a model; here, a support vector classifier (SVC) with a linear kernel:
from sklearn.svm import SVC
clf = SVC(kernel='linear', random_state=1)
Again, the random state is fixed so that running the same code later reproduces the same results.
The supervised models in scikit-learn implement a fit(X, y) method, which trains the model and returns the fitted estimator. X is the subset of observations used for training, and each element of y is the target of the corresponding observation in X. Here, we fit the model on the training data:
clf.fit(X_train, y_train)
Now, the clf variable is the fitted, or trained, model.
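If you are curious about what was learned, a fitted SVC exposes a few attributes; note that coef_ is defined here only because we chose a linear kernel:
# Peek inside the trained model (a side check, not a required step)
print(clf.support_vectors_.shape)  # support vectors selected during training
print(clf.coef_)  # one weight vector per pair of classes (linear kernel only)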
The estimator also has a predict(X) method, which takes unlabeled observations, here X_test, and returns the predicted values, y_pred. Note that the method does not return the estimator; it returns a set of predictions:
y_pred = clf.predict(X_test)
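To see what the predictions look like, compare the first few predicted labels with the held-out targets; the exact values depend on the split:
# Each entry is a predicted class label (0, 1, or 2 for the three iris species)
print(y_pred[:5])
print(y_test[:5])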
So far, you have done all but the last step. To examine the model's performance, load a scorer from the metrics module:
from sklearn.metrics import accuracy_score
With the scorer, compare the predictions with the held-out test targets:
accuracy_score(y_test, y_pred)
0.76315789473684215
Without knowing very much about the details of support vector machines, we have implemented a predictive model. We held out one-fourth of the data, examined how the SVC performed on that data, and in the end obtained a single number, the accuracy, that measures how well the model performed.
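Accuracy is simply the fraction of test observations classified correctly, so you can reproduce the score by hand with NumPy:
import numpy as np
# fraction of predictions that match the held-out targets;
# this equals accuracy_score(y_test, y_pred)
np.mean(y_pred == y_test)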
To recap, let's repeat all the steps with a different algorithm, logistic regression:
from sklearn.linear_model import LogisticRegression
from sklearn import datasets
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
# load the iris data and keep the first two features
iris = datasets.load_iris()
X = iris.data[:, :2]
y = iris.target
# hold out 25% of the data as a test set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=1)
# train the model
clf = LogisticRegression(random_state=1)
clf.fit(X_train, y_train)
# predict with logistic regression
y_pred = clf.predict(X_test)
# examine the model accuracy
accuracy_score(y_test, y_pred)
0.60526315789473684
This number is lower, yet we cannot draw any conclusions comparing the two models, SVC and logistic regression. We cannot compare them because we were not supposed to look at the test set while building our model. If we chose between SVC and logistic regression, that choice would be part of our model as well, so the test set cannot be involved in making it. Cross-validation, which we will look at next, is a way to choose between models.
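As a brief preview of that idea (a minimal sketch; the details come in the next section), scikit-learn's cross_val_score estimates each model's performance using only the training data, leaving the test set untouched for a final check:
from sklearn.model_selection import cross_val_score
# score each model on 5 folds of the training data only;
# the held-out test set plays no part in this comparison
svc_scores = cross_val_score(SVC(kernel='linear', random_state=1), X_train, y_train, cv=5)
lr_scores = cross_val_score(LogisticRegression(random_state=1), X_train, y_train, cv=5)
print(svc_scores.mean(), lr_scores.mean())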