You're reading from Hands-On Data Preprocessing in Python: Learn how to effectively prepare data for successful data analytics

Product type: Paperback
Published: Jan 2022
Publisher: Packt
ISBN-13: 9781801072137
Length: 602 pages
Edition: 1st Edition
Author: Roy Jafari
Table of Contents (24 chapters)

Preface
Part 1: Technical Needs
  Chapter 1: Review of the Core Modules of NumPy and Pandas
  Chapter 2: Review of Another Core Module – Matplotlib
  Chapter 3: Data – What Is It Really?
  Chapter 4: Databases
Part 2: Analytic Goals
  Chapter 5: Data Visualization
  Chapter 6: Prediction
  Chapter 7: Classification
  Chapter 8: Clustering Analysis
Part 3: The Preprocessing
  Chapter 9: Data Cleaning Level I – Cleaning Up the Table
  Chapter 10: Data Cleaning Level II – Unpacking, Restructuring, and Reformulating the Table
  Chapter 11: Data Cleaning Level III – Missing Values, Outliers, and Errors
  Chapter 12: Data Fusion and Data Integration
  Chapter 13: Data Reduction
  Chapter 14: Data Transformation and Massaging
Part 4: Case Studies
  Chapter 15: Case Study 1 – Mental Health in Tech
  Chapter 16: Case Study 2 – Predicting COVID-19 Hospitalizations
  Chapter 17: Case Study 3 – United States Counties Clustering Analysis
  Chapter 18: Summary, Practice Case Studies, and Conclusions
Other Books You May Enjoy

Exercises

  1. Use the adult.csv dataset and run the code shown in the following screenshots. Then, answer the questions that follow:
    Figure 1.48 – Exercise 1

    a) Based on the output, what is the difference in the behavior of .loc and .iloc when it comes to slicing?

    b) Without running, but just by looking at the data, what will be the output of adult_df.loc['10000':'10003', 'relationship':'sex']?

    c) Without running, but just by looking at the data, what will be the output of adult_df.iloc[0:3, 7:9]?
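
The slicing difference that Exercise 1 probes can be seen on a toy frame. The adult.csv data itself isn't reproduced here, so the frame below is a made-up stand-in with a similar index and a few of the same column names; only the slicing behavior matters.

```python
import pandas as pd

# Toy stand-in for adult_df; values and index are illustrative only.
df = pd.DataFrame(
    {'relationship': ['Husband', 'Wife', 'Own-child', 'Husband'],
     'race': ['White', 'Black', 'White', 'Asian-Pac-Islander'],
     'sex': ['Male', 'Female', 'Male', 'Male']},
    index=[10000, 10001, 10002, 10003]
)

# .loc slices by label and INCLUDES both endpoints ...
loc_slice = df.loc[10000:10002, 'relationship':'sex']

# ... while .iloc slices by position and EXCLUDES the stop value.
iloc_slice = df.iloc[0:2, 0:2]

print(loc_slice.shape)   # (3, 3)
print(iloc_slice.shape)  # (2, 2)
```

The same end-inclusive versus end-exclusive rule is what questions b) and c) ask you to predict on the real dataset.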

  2. Use Pandas to read adult.csv into adult_df and then use the .groupby() function to run the following code and create the multi-index series mlt_seris:
    import pandas as pd
    adult_df = pd.read_csv('adult.csv')
    mlt_seris = adult_df.groupby(['race','sex','income']).fnlwgt.mean()
    mlt_seris

    a) Now that you have created the multi-index series, run the following code, study the output, and answer this question: When we use .iloc[] on a multi-index series or DataFrame, what should we expect?

    print(mlt_seris.iloc[0])
    print(mlt_seris.iloc[1])
    print(mlt_seris.iloc[2])

    b) Run the following code first and then answer this question: When we use .loc[] to access data at the outermost index level of a multi-index series, what should we expect?

    mlt_seris.loc['Other']

    c) Run the following code first and then answer this question: When we use .loc[] to access the data of one of the inner (non-outermost) index levels of a multi-index series, what should we expect?

    When you run either of the following lines, you will get an error, and that is the point of this question. Study the error and try to answer the question:

    mlt_seris.loc['Female']
    mlt_seris.loc['<=50K']

    d) Run the following code first and then answer this question: How does the use of .loc[] or .iloc[] differ when working with a multi-index series or a DataFrame?

    print(mlt_seris.loc['Other']['Female']['<=50K'])
    print(mlt_seris.iloc[12])
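
The access patterns in Exercise 2 can be reproduced on a small synthetic series. The multi-index below mimics the (race, sex, income) structure of mlt_seris, but the level values are only a subset and the numbers are made up, so treat it as a sketch of the behavior rather than the real output.

```python
import pandas as pd

# Toy multi-index series: (race, sex, income) -> value; data is illustrative.
idx = pd.MultiIndex.from_product(
    [['Other', 'White'], ['Female', 'Male'], ['<=50K', '>50K']],
    names=['race', 'sex', 'income']
)
mlt_demo = pd.Series(range(8), index=idx, dtype=float)

# .iloc ignores the index levels entirely and returns the i-th stored value.
print(mlt_demo.iloc[0])

# .loc with a scalar key matches the OUTERMOST level only ...
print(mlt_demo.loc['Other'])      # a sub-series indexed by (sex, income)

# ... so a key from an inner level raises a KeyError.
try:
    mlt_demo.loc['<=50K']
except KeyError:
    print('inner-level key raises KeyError')

# Chained .loc lookups drill down one level at a time to a single value.
print(mlt_demo.loc['Other']['Female']['<=50K'])
```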
  3. For this exercise, you need to use a new dataset: billboard.csv. Visit https://www.billboard.com/charts/hot-100 to see the latest song rankings of the day. This dataset presents information and rankings for 317 song tracks across 80 columns. The first four columns are artist, track, time, and date_e. The first three columns are self-explanatory descriptors of each song track, and the date_e column shows the date the song entered the hot 100 list. The remaining 76 columns hold the song's ranking at the end of each week, from "w1" to "w76". Download and read this dataset using Pandas and answer the following questions:

    a) Write one line of code that gives you a great idea of how many null values each column has. If any columns have no non-null values, drop them.

    b) With a for loop, draw and study the values in each of the remaining W columns.

    c) The dataset is in wide format. Use an appropriate function to switch to a long format and name the transformed DataFrame mlt_df.

    d) Write code that shows mlt_df every 1,200 rows.

    e) Run the following code first and answer this question: Could this also have been done by using Boolean masking?

    mlt_df.query('artist == "Spears, Britney"')

    f) Use either the approach in e or the Boolean mask to extract all the unique songs that Britney Spears has in this dataset.

    g) In mlt_df, show all of the weeks when the song "Oops!.. I Did It Again" was in the top 100.
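
The wide-to-long reshape and the query/mask equivalence in Exercise 3 can be sketched on a two-song toy frame. billboard.csv is not reproduced here; the frame below only copies its layout (id columns plus weekly-ranking columns), and the rankings are invented.

```python
import pandas as pd

# Toy wide-format frame mirroring billboard.csv's layout; values are made up.
wide_df = pd.DataFrame({
    'artist': ['Spears, Britney', '2Pac'],
    'track': ['Oops!.. I Did It Again', "Baby Don't Cry"],
    'w1': [9, 87],
    'w2': [3, 82],
})

# Wide -> long: one row per (song, week) instead of one column per week.
mlt_df = wide_df.melt(
    id_vars=['artist', 'track'],
    var_name='week',
    value_name='rank'
)

# .query() and a Boolean mask select exactly the same rows.
by_query = mlt_df.query('artist == "Spears, Britney"')
by_mask = mlt_df[mlt_df.artist == 'Spears, Britney']
print(by_query.equals(by_mask))  # True
```

On the real dataset, the same .melt() call with the w1–w76 columns as value variables produces the mlt_df the later questions work with.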

  4. We will use LaqnData.csv for this exercise. Each row of this dataset shows an hourly measurement recording of one of the following five air pollutants: NO, NO2, NOX, PM10, and PM2.5. The data was collected in a location in London for the entirety of the year 2017. Read the data using Pandas and perform the following tasks:

    a) The dataset has six columns. Three of them, named 'Site', 'Units', and 'Provisional or Ratified', add no information, as they are constant across the whole dataset. Use the following code to drop them:

    air_df.drop(columns=['Site','Units','Provisional or Ratified'], inplace=True)

    b) The dataset is in long format. Apply the appropriate function to switch it to wide format. Name the transformed DataFrame pvt_df.

    c) Draw and study the histograms and boxplots for the columns of pvt_df.
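
The long-to-wide reshape in part b) can be sketched with .pivot(). LaqnData.csv is not reproduced here, so the toy frame below only mirrors its long layout (one row per timestamp/pollutant measurement) with invented readings for two of the five pollutants.

```python
import pandas as pd

# Toy long-format frame in the style of LaqnData.csv; readings are made up.
air_demo = pd.DataFrame({
    'ReadingDateTime': ['01/01/2017 00:00', '01/01/2017 00:00',
                        '01/01/2017 01:00', '01/01/2017 01:00'],
    'Species': ['NO', 'PM10', 'NO', 'PM10'],
    'Value': [11.4, 21.0, 10.2, 19.5],
})

# Long -> wide: one row per timestamp, one column per pollutant.
pvt_demo = air_demo.pivot(
    index='ReadingDateTime', columns='Species', values='Value'
)
print(pvt_demo.shape)  # (2, 2): 2 timestamps x 2 pollutants
```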

  5. We will continue working with LaqnData.csv:

    a) Run the following code, see its output, and then study the code to answer what each line of this code does:

    air_df = pd.read_csv('LaqnData.csv')
    air_df.drop(columns=['Site','Units','Provisional or Ratified'], inplace=True)
    datetime_df = air_df.ReadingDateTime.str.split(' ',expand=True)
    datetime_df.columns = ['Date','Time']
    date_df = datetime_df.Date.str.split('/',expand=True)
    date_df.columns = ['Day','Month','Year']
    air_df = air_df.join(date_df).join(datetime_df.Time).drop(columns=['ReadingDateTime','Year'])
    air_df

    b) Run the following code, see its output, and then study the code to answer what this line of code does:

    air_df = air_df.set_index(['Month','Day','Time','Species'])
    air_df

    c) Run the following code, see its output, and then study the code to answer what this line of code does:

    air_df.unstack()

    d) Compare the output of the preceding code with pvt_df from Exercise 4. Are they the same?

    e) What are the differences and similarities between the pair .melt()/.pivot() and the pair .stack()/.unstack()?

    f) If you had to choose one counterpart for .melt() between .stack() and .unstack(), which one would you choose?
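
A compact way to compare the two pairs from parts e) and f): .melt()/.pivot() operate on ordinary columns, while .stack()/.unstack() move levels between the row index and the column index. The toy frame below (made-up data, same shape of problem as air_df) shows both routes to the same wide table.

```python
import pandas as pd

# Toy long-format frame; the readings are illustrative only.
long_df = pd.DataFrame({
    'Time': ['00:00', '00:00', '01:00', '01:00'],
    'Species': ['NO', 'PM10', 'NO', 'PM10'],
    'Value': [11.4, 21.0, 10.2, 19.5],
})

# .unstack() needs the grouping columns moved into the row index first ...
wide = long_df.set_index(['Time', 'Species']).unstack()

# ... whereas .pivot() takes them directly as arguments.
wide_pivot = long_df.pivot(index='Time', columns='Species', values='Value')

# Both yield one column per Species; .unstack() keeps 'Value' as an extra
# column level, which is why the two results differ slightly in shape labels.
print(wide.shape, wide_pivot.shape)  # (2, 2) (2, 2)

# .stack() is the round trip back toward long format.
restacked = wide.stack()
print(restacked.shape)  # (4, 1)
```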
