Parsing robots.txt and sitemap.xml
In this section, we will introduce robots.txt- and sitemap.xml-related information and follow the instructions or resources available in those two files. We mentioned them in the Data-finding techniques used in web pages section of Chapter 1. In general, we can dive deep into the pages, or directories of pages, of websites and find data or uncover missing or hidden links using the robots.txt and sitemap.xml files.
The robots.txt file
The robots.txt file, or the Robots Exclusion Protocol, is a web-based standard or protocol used by websites to exchange information with automated scripts. robots.txt carries instructions regarding site-based links or resources for web robots (crawlers, spiders, web wanderers, or web bots), and uses directives such as Allow, Disallow, Sitemap, Crawl-delay, and User-agent to direct a robot's behavior.
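For illustration, a minimal robots.txt combining these directives might look as follows (a hypothetical example; the paths, delay value, and sitemap URL are placeholders, not taken from any real site):

User-agent: *
Disallow: /admin/
Allow: /admin/public/
Crawl-delay: 10
Sitemap: https://www.example.com/sitemap.xml

Here, all robots (User-agent: *) are told to stay out of /admin/ except for /admin/public/, to wait 10 seconds between requests, and where to find the site's sitemap file.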
We can find robots.txt by appending robots.txt to a website's main URL. For example, the robots.txt for https://www.python...
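As a minimal sketch of reading these directives programmatically, Python's standard library ships the urllib.robotparser module; the target URL below is an assumption chosen for demonstration:

from urllib import robotparser

# Create a parser, point it at the site's robots.txt, and download it
rp = robotparser.RobotFileParser()
rp.set_url("https://www.python.org/robots.txt")  # assumed example URL
rp.read()

# Ask whether any robot ("*") may fetch a given path
print(rp.can_fetch("*", "https://www.python.org/about/"))

# Crawl-delay (Python 3.6+) and Sitemap (Python 3.8+) entries, if present
print(rp.crawl_delay("*"))
print(rp.site_maps())

can_fetch() returns a Boolean we can check before requesting a page, while crawl_delay() and site_maps() return None when the file does not declare those directives.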