Working with Scrapy in the cloud
In this section, we will explore Scrapy for deploying spiders and crawlers in the cloud.
Scrapinghub
The first step is to register with the Scrapinghub service, which can be done at the following URL: https://app.scrapinghub.com/account/login/.
Scrapy Cloud is a platform for running web crawlers and spiders, where spiders run on cloud servers and scale on demand: https://scrapinghub.com/scrapy-cloud.
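To make the deployment target concrete, the following minimal spider is the kind of code you would run on Scrapy Cloud. It is only a sketch: the spider name, target site, and CSS selectors are illustrative (quotes.toscrape.com is a public demo site intended for scraping exercises):

import scrapy

class QuotesSpider(scrapy.Spider):
    # Illustrative spider: the name and start URL are placeholders
    name = "quotes"
    start_urls = ["https://quotes.toscrape.com/"]

    def parse(self, response):
        # Extract each quote's text and author from the page
        for quote in response.css("div.quote"):
            yield {
                "text": quote.css("span.text::text").get(),
                "author": quote.css("small.author::text").get(),
            }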
To deploy projects to Scrapy Cloud, you will need the Scrapinghub command-line client, called shub, which can be installed with pip. The following command installs it, or upgrades an existing installation to the latest version:
$ pip install shub --upgrade
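After installing shub, you authenticate against your Scrapinghub account; the client prompts for the API key shown on your account page:

$ shub login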
The next step is to create a project in Scrapinghub and deploy...
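Once the project exists, a typical deployment run from the Scrapy project's root directory looks like the following sketch, where 123456 is a placeholder for the numeric project ID that Scrapinghub assigns:

$ shub deploy 123456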