Summary
In this chapter, we covered several techniques for preparing data so that it can be consumed by machine learning algorithms.
One of these techniques is imputation, which fills in null values. For data that contains unexpected values, we can apply outlier handling.
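As a brief refresher, the two techniques can be sketched in pandas like this (the column name and sample values are hypothetical, and median imputation plus percentile clipping are just one common choice among several):

```python
import pandas as pd

# Hypothetical data: a null value and an implausible outlier
df = pd.DataFrame({"age": [25, None, 40, 200]})

# Imputation: fill null values with the column median
df["age"] = df["age"].fillna(df["age"].median())

# Outlier handling: clip values to the 1st-99th percentile range
low, high = df["age"].quantile([0.01, 0.99])
df["age"] = df["age"].clip(low, high)
```

Other strategies we saw, such as mean imputation or dropping outlier rows entirely, follow the same pattern.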
By using binning, we can categorize numeric data. If our numeric data is skewed, we can reduce that skewness by applying the variable transformations we looked at in the previous chapters.
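A minimal sketch of both ideas, assuming a hypothetical right-skewed income column (the bin edges and labels are illustrative, and `log1p` stands in for whichever transformation fits the data):

```python
import numpy as np
import pandas as pd

# Hypothetical right-skewed data
df = pd.DataFrame({"income": [20_000, 35_000, 90_000, 400_000]})

# Binning: categorize numeric income into labeled bands
df["income_band"] = pd.cut(
    df["income"],
    bins=[0, 30_000, 100_000, np.inf],
    labels=["low", "mid", "high"],
)

# Variable transformation: log1p compresses the long right tail
df["income_log"] = np.log1p(df["income"])
```
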
One-hot encoding, on the other hand, lets us expand a categorical column into multiple Boolean columns. With feature splitting, we can break a single column that packs several pieces of information into separate columns. Finally, we learned several methods for scaling our data.
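These three techniques can be illustrated together in a few lines of pandas (the column names and sample values are invented for the example, and min-max scaling is just one of the scaling methods covered):

```python
import pandas as pd

# Hypothetical data with a categorical, a composite, and a numeric column
df = pd.DataFrame({
    "color": ["red", "blue", "red"],
    "full_name": ["Ada Lovelace", "Alan Turing", "Grace Hopper"],
    "height": [150.0, 180.0, 165.0],
})

# One-hot encoding: one Boolean column per category
df = pd.get_dummies(df, columns=["color"])

# Feature split: break one composite column into two
df[["first_name", "last_name"]] = df["full_name"].str.split(" ", expand=True)

# Min-max scaling: rescale height to the [0, 1] range
h = df["height"]
df["height_scaled"] = (h - h.min()) / (h.max() - h.min())
```
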
Now that you know about all these techniques, you can take your first steps into machine learning.
In the next chapter, we will learn how to use the data we've prepared so far to create models using...