The user agent you use can affect the success of your scraper. Some websites will flatly refuse to serve content to specific user agents, either because the user agent identifies a scraper that has been banned, or because it belongs to an unsupported browser (such as Internet Explorer 6).
Another reason to control the user agent is that the web server may render content differently depending on the value specified. This is currently common for mobile sites, but it can also apply to desktop browsers, for example to deliver simpler content to older browsers. A quick way to see whether a site varies its response is sketched below.
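As a rough check, you can request the same page with a desktop and a mobile user agent and compare what comes back. This is a minimal sketch using the requests library; the URL and the user agent strings are placeholders you would substitute for a site you are allowed to test against:

```python
import requests

URL = 'http://www.example.com/'  # placeholder URL

DESKTOP_UA = ('Mozilla/5.0 (Windows NT 10.0; Win64; x64) '
              'AppleWebKit/537.36 (KHTML, like Gecko) Chrome/124.0 Safari/537.36')
MOBILE_UA = ('Mozilla/5.0 (iPhone; CPU iPhone OS 17_0 like Mac OS X) '
             'AppleWebKit/605.1.15 (KHTML, like Gecko) Mobile/15E148 Safari/604.1')

for label, ua in [('desktop', DESKTOP_UA), ('mobile', MOBILE_UA)]:
    resp = requests.get(URL, headers={'User-Agent': ua})
    # Differing status codes or body sizes suggest the server varies
    # its response based on the user agent it receives
    print(label, resp.status_code, len(resp.text))
```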
Therefore, it can be useful to set the user agent to a value other than the default. Scrapy defaults to a user agent named scrapybot. This can be configured with the BOT_NAME parameter. If you use Scrapy projects, Scrapy will set the agent to the name of your...
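Both values are ordinary Scrapy settings, so they can be changed in a project's settings.py. The sketch below assumes a project named mybot; the agent string and URL are placeholder values:

```python
# settings.py (a minimal sketch; 'mybot' and the URL are placeholders)
BOT_NAME = 'mybot'

# USER_AGENT, if set, overrides the default agent string entirely
USER_AGENT = 'mybot/1.0 (+http://www.example.com)'
```

For finer control, a spider can override the project-wide value through its custom_settings attribute, for example `custom_settings = {'USER_AGENT': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64)'}`, which applies only to that spider's requests.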