Handling duplicates

Duplicates show up in data for many reasons, but some of them can be surprisingly hard to spot. In this recipe, we will show you how to find the most common kinds of duplicates and handle them using Spark.
Getting ready
To execute this recipe, you need to have a working Spark environment. If you do not have one, you might want to go back to Chapter 1, Installing and Configuring Spark, and follow the recipes you will find there.
We will work on the dataset from the introduction. All the code that you will need in this chapter can be found in the GitHub repository we set up for the book: http://bit.ly/2ArlBck. Go to Chapter04 and open the 4.Preparing data for modeling.ipynb notebook.
No other prerequisites are required.
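Before we dive in, here is a minimal sketch of the kind of duplicate-spotting this recipe builds toward. The toy DataFrame and the duplicates-sketch application name below are placeholders of our own, not the chapter's dataset:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName('duplicates-sketch').getOrCreate()

# Hypothetical sample data: the last two rows are exact duplicates.
df = spark.createDataFrame(
    [(1, 'Alice', 32), (2, 'Bob', 45), (2, 'Bob', 45)],
    ['id', 'name', 'age'],
)

# Spot exact duplicates: if the distinct count is lower than the row
# count, at least one row is repeated verbatim.
print('Rows: {0}, distinct rows: {1}'.format(
    df.count(), df.distinct().count()))

# Drop rows that are duplicated across all columns.
df_clean = df.dropDuplicates()

# Drop rows that match on every column except the id, catching
# "same record, different id" duplicates.
df_no_id_dupes = df_clean.dropDuplicates(
    subset=[c for c in df_clean.columns if c != 'id']
)
df_no_id_dupes.show()
```

Comparing .count() against .distinct().count() is a cheap first check; .dropDuplicates() with a subset argument then lets you define what "duplicate" means for your data.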