Removing duplicate data
Duplicate data can be very misleading and can lead us to incorrect conclusions about the patterns and distribution of our data. It is therefore important to address duplicates in our dataset before embarking on any analysis, and a quick duplicate check is good practice in EDA. When working with tabular datasets, we can identify duplicate values in specific columns or duplicate records (across multiple columns). A good understanding of our dataset and its domain will give us insight into what should be considered a duplicate. In pandas, the `drop_duplicates` method can help us handle duplicate values or records within our dataset.
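For a quick sense of how this works before we turn to the full dataset, here is a minimal sketch on a small, made-up dataframe (the values are purely illustrative):

```python
import pandas as pd

# Tiny illustrative dataframe in which row 1 repeats row 0
df = pd.DataFrame({
    "Education": ["PhD", "PhD", "Master"],
    "Marital_Status": ["Single", "Single", "Married"],
})

print(df.duplicated().sum())   # 1 -> one row repeats an earlier row
print(df.drop_duplicates())    # the repeated row is removed
```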
Getting ready
We will work with the full Marketing Campaign data for this recipe.
How to do it…
We will remove duplicate data using the `pandas` library:
- Import the `pandas` library:
import pandas as pd
- Load the `.csv` file into a dataframe using `read_csv`. Then, subset the dataframe to include only relevant columns:
marketing_data = pd.read_csv("data/marketing_campaign.csv")
marketing_data = marketing_data[['Education','Marital_Status','Kidhome', 'Teenhome']]
- Inspect the data. Check the first few rows as well as the number of rows and columns:
marketing_data.head()
Education Marital_Status Kidhome Teenhome
0 Graduation Single 0 0
1 Graduation Single 1 1
2 Graduation Together 0 0
3 Graduation Together 1 0
4 PhD Married 1 0
marketing_data.shape
(2240, 4)
- Remove duplicates across the four columns in our dataset:
marketing_data_duplicate = marketing_data.drop_duplicates()
- Inspect the result:
marketing_data_duplicate.head()
Education Marital_Status Kidhome Teenhome
0 Graduation Single 0 0
1 Graduation Single 1 1
2 Graduation Together 0 0
3 Graduation Together 1 0
4 PhD Married 1 0
marketing_data_duplicate.shape
(135, 4)
We have now removed duplicates from our dataset.
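As a quick sanity check, we can re-run a duplicate count on the result; this sketch continues from step 4 and assumes `marketing_data_duplicate` is still in scope:

```python
# duplicated() flags rows that repeat an earlier row;
# a sum of 0 confirms no exact duplicates remain
print(marketing_data_duplicate.duplicated().sum())
```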
How it works...
We refer to `pandas` as `pd` in step 1. In step 2, we use `read_csv` to load the `.csv` file into a pandas dataframe and call it `marketing_data`. We also subset the dataframe to include only four relevant columns. In step 3, we inspect the dataset using `head()` to see the first five rows. Using the `shape` attribute, we get the number of rows and columns, respectively, from the tuple it returns.
In step 4, we use the `drop_duplicates` method to remove rows that are duplicated across the four columns of our dataset. We save the result in the `marketing_data_duplicate` variable. In step 5, we inspect the result using the `head` method to see the first five rows, and we use the `shape` attribute to check the number of rows and columns. We can see that the number of rows has decreased significantly, from 2,240 to 135.
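Since `shape` is a tuple, the two counts can also be unpacked directly; a small sketch, continuing from step 5:

```python
# shape is a (rows, columns) tuple
rows, cols = marketing_data_duplicate.shape
print(f"{rows} rows, {cols} columns")   # 135 rows, 4 columns
```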
There’s more...
The `drop_duplicates` method gives us some flexibility around dropping duplicates based on a subset of columns. By supplying a list of columns as the first argument (the `subset` parameter), we can drop all rows that contain duplicates in just those columns. This is useful when we have several columns and only a few key columns contain duplicate information. The method also lets us control which instance of a duplicate to retain, using the `keep` parameter. With `keep`, we can specify whether to keep the 'first' or 'last' instance, or to drop all instances of the duplicated information entirely. By default, the method keeps the first instance. The sketch below illustrates these options.
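Here is a minimal sketch of these options, reusing the `marketing_data` dataframe from this recipe; the choice of subset columns is purely illustrative:

```python
import pandas as pd

marketing_data = pd.read_csv("data/marketing_campaign.csv")
marketing_data = marketing_data[['Education', 'Marital_Status', 'Kidhome', 'Teenhome']]

# Deduplicate on two key columns only, keeping the first
# occurrence of each (Education, Marital_Status) combination
first_kept = marketing_data.drop_duplicates(subset=['Education', 'Marital_Status'])

# Keep the last occurrence of each combination instead
last_kept = marketing_data.drop_duplicates(
    subset=['Education', 'Marital_Status'], keep='last'
)

# Drop every row whose combination appears more than once
none_kept = marketing_data.drop_duplicates(
    subset=['Education', 'Marital_Status'], keep=False
)
```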