Handling duplicates
Duplicates show up in data for many reasons, and they can be hard to spot. In this recipe, we will show you how to find the most common kinds of duplicates and handle them using Spark.
Getting ready
To execute this recipe, you need to have a working Spark environment. If you do not have one, you might want to go back to Chapter 1, Installing and Configuring Spark, and follow the recipes you will find there.
We will work on the dataset from the introduction. All the code that you will need in this chapter can be found in the GitHub repository we set up for the book: http://bit.ly/2ArlBck. Go to Chapter04 and open the 4.Preparing data for modeling.ipynb notebook.
No other prerequisites are required.
How to do it...
A duplicate is a record in your dataset that appears more than once; it is an exact copy of another row. Spark DataFrames have a convenience method for removing duplicated rows: the .dropDuplicates() transformation.
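Before walking through the steps, here is a minimal, self-contained sketch of how .dropDuplicates() behaves. The SparkSession setup and the toy id/name/age rows are our own illustration, not the chapter's dataset:

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName('dedup-sketch').getOrCreate()

# A toy DataFrame in which rows 1 and 3 are exact copies of each other
df = spark.createDataFrame(
    [(1, 'Alice', 29), (2, 'Bob', 34), (1, 'Alice', 29)],
    ['id', 'name', 'age']
)

# Remove rows that are exact copies across all columns
df.dropDuplicates().show()

# Alternatively, compare rows on a subset of columns only
df.dropDuplicates(subset=['id']).show()

Note that .dropDuplicates() is a transformation: it returns a new DataFrame and leaves the original unchanged.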
- Check whether any rows are duplicated, as follows:
print('Count of rows: {0}'.format(dirty_data.count()))
print('Count of distinct rows: {0}'.format(dirty_data.distinct().count()))
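If the distinct count is lower than the total row count, the DataFrame contains exact duplicates. A common follow-up, sketched here as our own continuation rather than the notebook's exact next step, is to drop them and re-verify:

# Drop exact duplicates and confirm the row count now matches the distinct count
full_removed = dirty_data.dropDuplicates()
print('Count of rows after dropDuplicates: {0}'.format(full_removed.count()))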