Practical Data Analysis Cookbook
Tomasz Drabas

Retrieving HTML pages with pandas

Although not as popular for storing large datasets as the previously covered formats, data is sometimes found in tables on web pages. These structures are normally enclosed within the <table> </table> HTML tags. This recipe will show you how to retrieve data from a web page.

Getting ready

In order to execute the following recipe, you need the pandas and re modules available. The re module is Python's regular expressions module, and we will use it to clean up the column names. Also, the read_html(...) method of pandas requires html5lib to be present on your computer. If you use the Anaconda distribution of Python, you can install it by issuing the following command from your command line:

conda install html5lib

Otherwise, you can download the source from https://github.com/html5lib/html5lib-python, unzip it, and install the module manually:

cd html5lib-python-parser
python setup.py install
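Alternatively, if you have pip available, you can install html5lib directly from PyPI:

pip install html5lib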

No other prerequisites are required.

How to do it…

pandas makes it very easy to access, retrieve, and parse HTML files. All of this can be done in two lines of code. The retrieve_html.py file contains more code than that, and we will discuss it in the next section:

import pandas as pd

# URL to retrieve
url = 'https://en.wikipedia.org/wiki/' + \
      'List_of_airports_by_IATA_code:_A'

# extract the data from the HTML
url_read = pd.read_html(url, header=0)[0]

How it works…

The read_html(...) method of pandas parses the DOM of the HTML file and retrieves the data from all the tables. It accepts a URL, a file, or a raw string with HTML tags as its first parameter. In our example, we also specified header=0 to extract the header from the table. The read_html(...) method returns a list of DataFrame objects, one for each table in the HTML file. The list of airports page from Wikipedia contains only one table, so we retrieve only the first element from the returned list of DataFrames. That's it! The list of airports is already in the url_read object.
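To illustrate that read_html(...) always returns a list, here is a minimal, self-contained sketch that parses a made-up table from a string; the HTML is wrapped in io.StringIO, which works as a file-like input across pandas versions:

import io
import pandas as pd

# a tiny, made-up HTML table used purely for illustration
html = '''
<table>
  <tr><td>IATA</td><td>Airport_name</td></tr>
  <tr><td>AAA</td><td>Anaa Airport</td></tr>
</table>
'''

tables = pd.read_html(io.StringIO(html), header=0)
print(len(tables))   # 1 -- one DataFrame per table found
print(tables[0])     # the parsed table as a DataFrame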

However, there are two issues with the retrieved data: the column names contain whitespace and there are separator rows in the data. As the names can contain a variety of whitespace characters (space, tab, and so on), we use the re module:

import re

# regular expression to find any whitespace in a string
space = re.compile(r'\s+')

def fix_string_spaces(columnsToFix):
    '''
        Converts the spaces in the column name to underscore
    '''
    tempColumnNames = [] # list to hold fixed column names

    # loop through all the columns
    for item in columnsToFix:
        # if space is found
        if space.search(item):
            # fix and append to the list
            tempColumnNames.append('_'.join(space.split(item)))
        else:
            # else append the original column name
            tempColumnNames.append(item)

    return tempColumnNames

First, we compile a regular expression that matches one or more consecutive whitespace characters.
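As a quick illustration of what the compiled pattern does (the sample strings are made up):

import re

space = re.compile(r'\s+')

print(bool(space.search('Airport name')))  # True -- the name contains whitespace
print(space.split('Time zone\tDST'))       # ['Time', 'zone', 'DST']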

Tip

It goes beyond the scope of this book to discuss regular expressions in detail. A good compendium of knowledge on this topic can be found at https://www.packtpub.com/application-development/mastering-python-regular-expressions or in the re module documentation found at https://docs.python.org/3/library/re.html.

The method then loops through all the columns and, if it finds a space in the name (space.search(...)), it splits the column name (space.split(...)) into a list. The list is then concatenated using '_' as a separator. If, however, the column name contains no spaces, the original name is appended to the list of column names. To alter the column names in the DataFrame, we use the following code:

url_read.columns = fix_string_spaces(url_read.columns)
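For example, assuming the fix_string_spaces(...) helper defined previously and some made-up column names:

print(fix_string_spaces(['IATA', 'Airport name', 'Time zone']))
# prints ['IATA', 'Airport_name', 'Time_zone']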

If you look at Wikipedia's list of airports table, you can see that it contains separator rows that group the IATA codes according to the first two letters of the code. In these rows, all the other columns are empty. To deal with this issue, we can use DataFrame's .dropna(...) method.

Note

pandas has a couple of methods to deal with NaN (Not a Number) observations. In the Imputing missing observations recipe, we introduce the .fillna(...) method.

The .dropna(...) method drops rows (or columns, if we want) that contain at least one missing observation. However tempting it may be to just use .dropna(...) without any parameters, we would also drop all the legitimate rows that merely lack the Daylight Saving Time (DST) or ICAO code. We can, however, specify a threshold: the thresh parameter sets the minimum number of non-missing values a row must contain to be kept. A rough look at the data leads to the conclusion that some legitimate records can contain up to two missing values. The inplace=True parameter removes the rows from the original DataFrame instead of creating a copy and returning a trimmed DataFrame; the default is inplace=False:

url_read.dropna(thresh=2, inplace=True)
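Here is a minimal sketch of how thresh behaves, using a made-up DataFrame that mimics a separator row:

import numpy as np
import pandas as pd

df = pd.DataFrame({
    'IATA': ['-AA-', 'AAA', 'AAB'],
    'Airport_name': [np.nan, 'Anaa Airport', 'Arrabury Airport'],
    'ICAO': [np.nan, 'NTGA', np.nan]
})

# keep only rows with at least two non-missing values;
# the separator row '-AA-' has just one, so it is dropped
print(df.dropna(thresh=2))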

Once we remove some rows, the DataFrame index will have gaps. We can recreate it using the following code:

url_read.index = range(0, len(url_read))
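An equivalent, more idiomatic way to achieve the same effect is the .reset_index(...) method:

# drop=True discards the old index instead of inserting it as a column
url_read.reset_index(drop=True, inplace=True)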

To print out the top 10 IATA codes with corresponding airport names, we can use the following code:

print(url_read.head(10)[['IATA', 'Airport_name']])

If you want to retrieve more than one column, you pass the names in the form of a list; in our case, this was ['IATA', 'Airport_name']. The same results can be attained with the following code:

print(url_read[0:10][['IATA', 'Airport_name']])
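The same selection can also be expressed with the .loc[...] indexer; note that .loc slicing is label-based and inclusive, so with the recreated index 0:9 returns ten rows:

print(url_read.loc[0:9, ['IATA', 'Airport_name']])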