Part of good web scraping etiquette is making sure you are not putting too much load on your target web server. This means limiting the number of requests you make within a given period of time. This is especially important for smaller servers, which have a much more limited pool of resources. As a good rule of thumb, you should only access a page as often as you expect it to change. For example, if you were scraping daily deals, you would probably only need to scrape once per day. When scraping multiple pages from the same website, first honor the Crawl-delay directive in the site's robots.txt file. If no Crawl-delay is specified, delay each request by one second after every page.
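The logic above can be sketched with Python's standard-library `urllib.robotparser`, which exposes a `crawl_delay()` method. The robots.txt body below is a made-up example; in practice you would fetch the file from the target site and feed its lines to `parse()`:

```python
import urllib.robotparser

# Illustrative robots.txt content; a real crawler would download
# https://<site>/robots.txt instead of hard-coding it.
ROBOTS_TXT = """\
User-agent: *
Crawl-delay: 5
"""

rp = urllib.robotparser.RobotFileParser()
rp.parse(ROBOTS_TXT.splitlines())

# crawl_delay() returns the Crawl-delay for the given user agent,
# or None when robots.txt does not specify one.
delay = rp.crawl_delay("*")
if delay is None:
    delay = 1.0  # fall back to one second between pages

print(delay)
```

With the sample file above, `delay` ends up as 5 seconds; if the `Crawl-delay` line were removed, the fallback of one second would apply.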
There are many different ways to incorporate delays into your crawler, from manually putting your program to sleep to using external...
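The simplest of these, manually putting the program to sleep, might look like the following sketch. The one-second delay, the `fetch` placeholder, and the page URLs are all illustrative, not a real scraping setup:

```python
import time

DELAY = 1  # seconds between requests; use a larger value if robots.txt demands it

def fetch(url):
    # Placeholder for a real HTTP request (e.g. with urllib or requests).
    return url

fetched = []
for page in ["https://example.com/a", "https://example.com/b"]:
    fetched.append(fetch(page))
    time.sleep(DELAY)  # pause before hitting the server again
```

Calling `time.sleep()` after every request guarantees the server sees at most one request per `DELAY` seconds from this crawler, at the cost of keeping the program idle during the wait.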