- First, state the problem: we want to determine the flower-type category of new observations. Predicting a category is a classification task, and because the available data includes a target variable (which we have named y), this is a supervised classification problem.
In supervised learning, a model is trained on pairs of input variables and a known output variable, and is then used to predict the output variable for new inputs.
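The section assumes the observation matrix X and the targets y already exist. As a minimal sketch of one possible setup (assuming the data is scikit-learn's built-in iris dataset, which the section itself does not confirm), they could be loaded like this:
from sklearn.datasets import load_iris
X, y = load_iris(return_X_y=True)  # X: flower measurements; y: flower-type labels (0, 1, 2)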
- Next, choose a model to solve the supervised classification problem. For now, we will use a support vector classifier (SVC). It is a commonly used algorithm because it is simple and, with a linear kernel, interpretable: its learned parameters can be read and understood.
- To measure predictive performance, split the dataset into training and test sets. The training set is the data the model learns from. The test set is data we hold out, pretending not to know its targets, so that we can measure how well the learning procedure generalizes. Import a function that splits the dataset:
from sklearn.model_selection import train_test_split
- Apply the function to both the observation and target data:
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=1)
The test size of 0.25 reserves 25% of the whole dataset for testing. Setting random_state=1 fixes the function's random seed, so every call produces the same split; for now this matters because it makes the results reproducible.
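As a quick sanity check (assuming X and y are NumPy arrays, such as the iris data sketched above), you can confirm the split proportions by inspecting the shapes:
print(X_train.shape, X_test.shape)  # e.g. (112, 4) and (38, 4) for 150 iris observations
print(y_train.shape, y_test.shape)  # e.g. (112,) and (38,)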
- Now import a commonly used estimator, a support vector machine:
from sklearn.svm import SVC
- You have imported a support vector classifier from the svm module. Now create an instance of a linear SVC:
clf = SVC(kernel='linear', random_state=1)
The random state is fixed to reproduce the same results with the same code later.
Supervised models in scikit-learn implement a fit(X, y) method, which trains the model and returns the trained estimator itself. X is a subset of the observations, and each element of y is the target of the corresponding observation in X. Fit the model on the training data:
clf.fit(X_train, y_train)
Now, the clf variable is the fitted, or trained, model.
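Because the kernel is linear, the fitted classifier exposes the parameters of its decision boundaries, which is what makes it relatively interpretable. For example:
clf.coef_       # one weight vector per pairwise (one-vs-one) class boundary
clf.intercept_  # the corresponding bias terms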
The estimator also has a predict(X) method, which produces predictions for unlabeled observations. Here we pass the test observations, X_test, and store the predicted values in y_pred. Note that unlike fit, this method does not return the estimator; it returns a set of predictions:
y_pred = clf.predict(X_test)
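Before scoring, you can eyeball a few predictions against the held-out targets; this is a quick, optional sanity check, not a required step:
print(y_pred[:5])  # predicted labels for the first five test observations
print(y_test[:5])  # the true labels for the same observations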
So far, you have completed every step but the last. To examine model performance, import a scorer from the metrics module:
from sklearn.metrics import accuracy_score
With the scorer, compare the predictions with the held-out test targets:
accuracy_score(y_test, y_pred)
0.76315789473684215
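The accuracy score is simply the fraction of test observations that were predicted correctly. Assuming y_test and y_pred are NumPy arrays, you can verify the number by hand:
(y_pred == y_test).mean()  # fraction of correct predictions; matches accuracy_score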