Most pages on a website are freely accessible to web scrapers and bots. Common reasons for allowing this include being indexed by search engines and letting pages be discovered by content curators. Googlebot is one crawler that most websites are more than happy to give access to their content. However, some sites may not want everything to show up in a Google search result. Imagine if you could google a person and instantly obtain all of their social media profiles, complete with contact information and address. This would be bad news for the person, and certainly not good privacy practice for the company hosting the site. To control bot access to different parts of a website, you configure a robots.txt file.
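As a quick illustration, a well-behaved scraper can check a site's robots.txt before fetching a page. Here is a minimal sketch using Python's standard-library urllib.robotparser; the domain, user agent name, and paths are placeholders, not taken from any real site:

```python
from urllib.robotparser import RobotFileParser

# Point the parser at the site's robots.txt (placeholder domain).
rp = RobotFileParser()
rp.set_url("https://example.com/robots.txt")
rp.read()

# Ask whether a given user agent may fetch a given URL.
# "MyScraper" and the paths below are hypothetical examples.
print(rp.can_fetch("MyScraper", "https://example.com/public/page.html"))
print(rp.can_fetch("MyScraper", "https://example.com/private/profile.html"))
```

If the rules in robots.txt disallow the path for that user agent, can_fetch returns False, and a polite scraper would skip the request.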
The robots.txt file is typically hosted at the root of the website, at /robots.txt...