Go Web Scraping Quick Start Guide
Implement the power of Go to scrape and crawl data from the web

Author: Vincent Smith
Product type: Paperback
Published in: Jan 2019
Publisher: Packt
ISBN-13: 9781789615708
Length: 132 pages
Edition: 1st Edition

Why do you need a web scraper?

There are many different use cases where you might need to build a web scraper. They all center around the fact that information on the internet is often scattered, but can be very valuable when collected into a single package. Often, in these cases, the person collecting the information has no working or business relationship with the producers of the data, so they cannot ask for the information to be packaged and delivered to them. Without such a relationship, whoever needs the data has to rely on their own means to gather it.

Search engines

One well-known use case for web scraping is indexing websites for the purpose of building a search engine. In this case, a web scraper would visit different websites and follow references to other websites in order to discover all of the content available on the internet. By storing some of the content from each page, you could respond to search queries by matching the terms against the pages you have collected. You could also suggest similar pages by tracking how pages link together, and rank the most important pages by the number of connections they have to other sites.
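
To make the crawl-and-follow pattern concrete, here is a minimal sketch in Go: it fetches a single page and collects the href values of its anchor tags. The use of net/http with the golang.org/x/net/html parser and the https://example.com seed URL are assumptions of this sketch, not tooling prescribed by this book; a real crawler would also deduplicate URLs, respect robots.txt, and throttle its requests.

```go
package main

import (
	"fmt"
	"net/http"

	"golang.org/x/net/html"
)

// collectLinks fetches a page and returns the href values of its <a> tags,
// which is the core of the fetch-and-follow step a crawler repeats.
func collectLinks(url string) ([]string, error) {
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()

	doc, err := html.Parse(resp.Body)
	if err != nil {
		return nil, err
	}

	var links []string
	var visit func(n *html.Node)
	visit = func(n *html.Node) {
		if n.Type == html.ElementNode && n.Data == "a" {
			for _, attr := range n.Attr {
				if attr.Key == "href" {
					links = append(links, attr.Val)
				}
			}
		}
		for c := n.FirstChild; c != nil; c = c.NextSibling {
			visit(c)
		}
	}
	visit(doc)
	return links, nil
}

func main() {
	// Placeholder seed URL; a crawler would queue and revisit the links
	// it discovers here.
	links, err := collectLinks("https://example.com")
	if err != nil {
		panic(err)
	}
	for _, link := range links {
		fmt.Println(link)
	}
}
```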

Googlebot is the most famous example of a web scraper used to build a search engine. It is the first step in building the search engine, as it downloads, indexes, and ranks each page on a website. It also follows links to other websites, which is how it is able to index a substantial portion of the internet. According to Googlebot's documentation, the scraper attempts to reach each web page every few seconds, which puts its estimated crawl volume well into the billions of pages per day!

If your goal is to build a search engine, albeit on a much smaller scale, you will find enough tools in this book to collect the information you need. This book will not, however, cover indexing and ranking pages to provide relevant search results.

Price comparison

Another well-known use case is finding specific products or services sold through various websites and tracking their prices. You would be able to see who sells the item, who has the lowest price, or when it is most likely to be in stock. You might even be interested in similar products from different sources. Having a web scraper periodically visit websites to monitor these products and services would easily solve this problem. The same approach applies to tracking prices for flights, hotels, and rental cars.

Sites like camelcamelcamel (https://camelcamelcamel.com/) build their business model around such a case. According to their blog post explaining how their system works, they actively collect pricing information from multiple retailers every half hour to every few hours, covering millions of products. This allows users to view pricing differences across multiple platforms, as well as get notified if the price of an item drops.
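
As a rough sketch of this kind of periodic monitor (not camelcamelcamel's actual implementation), the following Go program polls a placeholder product URL on a fixed interval with time.Tick from the standard library. Extracting the price itself from the page is left for the parsing chapters later in this book, so the sketch only reports the fetch status.

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

// checkPrice fetches a product page. Parsing the price out of the HTML
// is covered in later chapters; here we only report the fetch status.
func checkPrice(url string) {
	resp, err := http.Get(url)
	if err != nil {
		fmt.Println("fetch failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Printf("%s: fetched %s at %s\n", url, resp.Status, time.Now().Format(time.RFC3339))
}

func main() {
	// Placeholder product URL; a real monitor would track many of these.
	url := "https://example.com/product/123"

	// Poll once an hour, in the spirit of camelcamelcamel's half-hourly
	// to every-few-hours schedule. time.Tick is fine for a monitor that
	// runs for the life of the program.
	for range time.Tick(1 * time.Hour) {
		checkPrice(url)
	}
}
```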

This type of web scraper requires very careful parsing of the web pages to extract only the content that is relevant. In later chapters, you will learn how to extract information from HTML pages in order to collect this information.
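
For a taste of what that targeted extraction can look like, here is a hedged sketch using the third-party goquery package, which lets you select elements with CSS selectors; goquery is an assumption of this example rather than necessarily the tooling the later chapters use, and both the URL and the .product-price selector are placeholders.

```go
package main

import (
	"fmt"
	"net/http"

	"github.com/PuerkitoBio/goquery"
)

func main() {
	// Placeholder URL; a real scraper would target an actual product page.
	resp, err := http.Get("https://example.com/product/123")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	doc, err := goquery.NewDocumentFromReader(resp.Body)
	if err != nil {
		panic(err)
	}

	// Pull out only the one element that is relevant, ignoring the rest
	// of the page. The .product-price selector is hypothetical.
	price := doc.Find(".product-price").First().Text()
	fmt.Println("current price:", price)
}
```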

Building datasets

Data scientists often need hundreds of thousands of data points in order to build, train, and test machine learning models. In some cases, this data is already packaged and ready for consumption. More often, the scientist needs to venture out and build a custom dataset. This is typically done by building a web scraper to collect raw data from various sources of interest, and refining it so it can be processed later on. These web scrapers also need to collect fresh data periodically, to update their predictive models with the most relevant information.
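
As one small sketch of the collect-and-refine step, the following Go program appends scraped records to a JSON Lines file using only the standard library; the Record type, its fields, and the sample entry are hypothetical placeholders for whatever a real dataset would carry.

```go
package main

import (
	"encoding/json"
	"os"
)

// Record is a hypothetical shape for one scraped data point; a real
// dataset would carry whatever fields the downstream model needs.
type Record struct {
	Source    string `json:"source"`
	Text      string `json:"text"`
	Collected string `json:"collected"`
}

func main() {
	// Appending one JSON object per line keeps the dataset easy to
	// regenerate incrementally and to stream into later processing.
	f, err := os.OpenFile("dataset.jsonl", os.O_APPEND|os.O_CREATE|os.O_WRONLY, 0644)
	if err != nil {
		panic(err)
	}
	defer f.Close()

	enc := json.NewEncoder(f)
	records := []Record{
		{Source: "https://example.com/forum/1", Text: "Sample comment", Collected: "2019-01-01"},
	}
	for _, r := range records {
		if err := enc.Encode(&r); err != nil {
			panic(err)
		}
	}
}
```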

A common use case that data scientists run into is determining how people feel about a specific subject, known as sentiment analysis. Through this process, a company could look for discussions surrounding one of their products, or their overall presence, and gather a general consensus. In order to do this, the model must be trained on what a positive comment and a negative comment are, which could take thousands of individual comments in order to make a well-balanced training set. Building a web scraper to collect comments from relevant forums, reviews, and social media sites would be helpful in constructing such a dataset.

These are just a few examples of the web scrapers that drive large businesses such as Google, Mozenda, and Cheapflights.com. There are also companies that will scrape the web for whatever data you need, for a fee. To run scrapers at such a large scale, you need a language that is fast, scalable, and easy to maintain.
