In this chapter, we reviewed a number of techniques for protecting ourselves and our web scrapers while browsing the internet. By using virtual private servers, we keep our personal assets safe from malicious activity and from being discoverable on the internet. Proxies help conceal the source of our internet traffic, providing a layer of anonymity. VPNs add an extra layer of security on top of proxies by creating an encrypted tunnel for our data to flow through. Finally, creating whitelists and blacklists ensures that our scraper will not venture into unknown and undesirable places; the sketch below illustrates how these pieces can fit together.
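As a brief recap of the last two ideas, here is a minimal Go sketch that routes requests through a proxy via `http.ProxyURL` and checks each target host against a whitelist and a blacklist before fetching. The proxy address and the host lists are placeholder assumptions for illustration, not values from this chapter.

```go
package main

import (
	"fmt"
	"net/http"
	"net/url"
	"strings"
)

// Hypothetical proxy address; substitute your own proxy here.
const proxyAddr = "http://localhost:8080"

// Example host lists; in practice these would come from configuration.
var (
	whitelist = []string{"example.com"}
	blacklist = []string{"admin.example.com"}
)

// allowed reports whether a URL's host clears the blacklist and
// matches an entry on the whitelist.
func allowed(raw string) bool {
	u, err := url.Parse(raw)
	if err != nil {
		return false
	}
	host := u.Hostname()
	for _, b := range blacklist {
		if host == b || strings.HasSuffix(host, "."+b) {
			return false
		}
	}
	for _, w := range whitelist {
		if host == w || strings.HasSuffix(host, "."+w) {
			return true
		}
	}
	return false
}

func main() {
	proxyURL, err := url.Parse(proxyAddr)
	if err != nil {
		panic(err)
	}
	// Route all requests through the proxy so the target site
	// sees the proxy's address rather than our own.
	client := &http.Client{
		Transport: &http.Transport{Proxy: http.ProxyURL(proxyURL)},
	}

	target := "https://example.com/products"
	if !allowed(target) {
		fmt.Println("skipping disallowed URL:", target)
		return
	}
	resp, err := client.Get(target)
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}
```

Checking the blacklist before the whitelist means an explicitly forbidden subdomain is rejected even when its parent domain is allowed.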
In Chapter 7, Scraping with Concurrency, we will look at how to use concurrency to increase the scale of our web scraper without the added cost of extra resources.