In supervised learning, we usually deal with a variety of labels. These can be either numbers or words. If they are numbers, the algorithm can use them directly. However, labels are often kept in a human-readable form, so people usually label the training data with words.
Label encoding
Getting ready
Label encoding refers to transforming word labels into a numerical form so that algorithms can understand how to operate on them. Let's take a look at how to do this.
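Before reaching for scikit-learn, it helps to see that label encoding is, at heart, a dictionary lookup. The following hand-rolled sketch (the mapping values are arbitrary and chosen only for illustration) shows what the library will automate for us:
>> mapping = {'audi': 0, 'bmw': 1, 'ford': 2, 'toyota': 3}
>> print([mapping[label] for label in ['toyota', 'ford', 'audi']])
[3, 2, 0]
Maintaining such a mapping by hand quickly becomes error-prone, which is exactly the bookkeeping that scikit-learn's label encoder takes care of in this recipe.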
How to do it...
Let's see how to carry out label encoding in Python:
- Create a new Python file and import the preprocessing package:
>> from sklearn import preprocessing
- This package contains various utilities that are needed for data preprocessing. To encode labels with values between 0 and n_classes-1, the preprocessing.LabelEncoder class can be used. Let's define the label encoder, as follows:
>> label_encoder = preprocessing.LabelEncoder()
- The label_encoder object knows how to understand word labels. Let's create some labels:
>> input_classes = ['audi', 'ford', 'audi', 'toyota', 'ford', 'bmw']
- We are now ready to encode these labels—first, we fit the label encoder with the fit() method, and then we print the resulting class mapping:
>> label_encoder.fit(input_classes)
>> print("Class mapping: ")
>> for i, item in enumerate(label_encoder.classes_):
... print(item, "-->", i)
- Run the code, and you will see the following output on your Terminal:
Class mapping:
audi --> 0
bmw --> 1
ford --> 2
toyota --> 3
- As shown in the preceding output, the words have been sorted alphabetically and mapped to zero-indexed numbers. Now, when you encounter a set of labels, you can simply transform them, as follows:
>> labels = ['toyota', 'ford', 'audi']
>> encoded_labels = label_encoder.transform(labels)
>> print("Labels =", labels)
>> print("Encoded labels =", list(encoded_labels))
Here is the output that you'll see on your Terminal:
Labels = ['toyota', 'ford', 'audi']
Encoded labels = [3, 2, 0]
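As a side note, the two steps can be combined: scikit-learn's fit_transform() method fits the encoder and returns the encoded labels in a single call. Here is a minimal sketch, reusing the input_classes list from earlier:
>> encoded = label_encoder.fit_transform(input_classes)
>> print("Encoded in one step =", list(encoded))
Given the alphabetical class mapping shown previously, this prints the following:
Encoded in one step = [0, 2, 0, 3, 2, 1]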
- This is much easier than manually maintaining a mapping between words and numbers. You can check the correctness by transforming numbers back into word labels:
>> encoded_labels = [2, 1, 0, 3, 1]
>> decoded_labels = label_encoder.inverse_transform(encoded_labels)
>> print("Encoded labels =", encoded_labels)
>> print("Decoded labels =", list(decoded_labels))
The inverse_transform() method transforms the encoded values back into their original word labels. Here is the output:
Encoded labels = [2, 1, 0, 3, 1]
Decoded labels = ['ford', 'bmw', 'audi', 'toyota', 'bmw']
As you can see, the mapping is preserved perfectly.
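One caveat: the encoder only knows the classes it saw during fitting, and passing transform() an unseen label raises a ValueError. Here is a minimal sketch of guarding against this (honda is a hypothetical brand used only for illustration, since it was not among our training labels):
>> try:
...     label_encoder.transform(['honda'])
... except ValueError as e:
...     print("Unseen label:", e)
Catching the exception lets you decide how to handle unknown categories, for example, by skipping them or mapping them to a dedicated unknown class.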
How it works...
In this recipe, we used the preprocessing.LabelEncoder class to transform word labels into numerical form. To do this, we first defined a list of labels representing car brands. We then fitted the encoder and transformed these labels into numerical values. Finally, to verify the procedure, we printed the numerical value assigned to each class and confirmed that inverse_transform() recovers the original word labels.
There's more...
In the last two recipes, Label encoding and One-hot encoding, we have seen how to transform data. Both methods are suitable for dealing with categorical data. But what are the pros and cons of the two methodologies? Let's take a look:
- Label encoding can transform categorical data into numeric data, but the imposed ordinality creates problems: a model may treat the arbitrary integer codes as if they carried order or magnitude (for example, as if toyota, encoded as 3, were three times bmw, encoded as 1).
- One-hot encoding has the advantage that the result is binary rather than ordinal, and that the encoded vectors live in an orthogonal vector space. The disadvantage is that, for high-cardinality features, the feature space can explode, as the sketch after this list shows.
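To make the contrast concrete, here is a minimal sketch that one-hot encodes the same car brands with scikit-learn's OneHotEncoder class. OneHotEncoder expects a 2D array, hence the reshape; .toarray() converts its sparse result into a dense matrix for printing:
>> import numpy as np
>> onehot_encoder = preprocessing.OneHotEncoder()
>> X = np.array(input_classes).reshape(-1, 1)
>> print(onehot_encoder.fit_transform(X).toarray())
[[1. 0. 0. 0.]
 [0. 0. 1. 0.]
 [1. 0. 0. 0.]
 [0. 0. 0. 1.]
 [0. 0. 1. 0.]
 [0. 1. 0. 0.]]
Each row contains a single 1 in the column of its class, so no order or magnitude is implied between brands; the price is one column per distinct category, which is where the feature-space explosion comes from.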
See also
- Scikit-learn's official documentation on the sklearn.preprocessing.LabelEncoder class: https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.LabelEncoder.html.