Introduction
The previous chapter covered how to create a successful data wrangling pipeline. In this chapter, we will use all of the techniques we have learned so far to build a web scraper that a data wrangling professional can use in their daily tasks. This chapter builds on the foundation of BeautifulSoup and introduces various methods for scraping a web page and using an API to gather data.
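To give a flavor of what is coming, here is a minimal sketch of both approaches, assuming the requests and BeautifulSoup libraries are installed and using placeholder URLs; each step is covered in detail later in the chapter.

    # A minimal preview of the two approaches covered in this chapter.
    # The URLs below are placeholders; any publicly reachable page or API works.
    import requests
    from bs4 import BeautifulSoup

    # 1. Scrape a web page: download the HTML and parse it.
    response = requests.get("https://example.com")
    soup = BeautifulSoup(response.text, "html.parser")
    print(soup.title.string)           # the text inside the page's <title> tag

    # 2. Call a web API: many services return JSON directly.
    api_response = requests.get("https://api.github.com")
    print(api_response.json().keys())  # top-level keys of the JSON payload

Scraping means parsing HTML that was designed for people to read, whereas an API returns structured data (usually JSON) that was designed for programs to consume; we will use both in this chapter.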
In today's connected world, one of the most valued and widely used skills for a data wrangling professional is the ability to extract and read data from web pages and databases hosted on the web. Most organizations host data in the cloud (public or private), and most web microservices today provide some kind of API so that external users can access their data. Let's take a look at the following diagram:
As we can see in the diagram, to fetch data from a web server or a database...