In this chapter, we will learn to containerize our scraper, packaging it for modern, cloud-enabled operations. This involves packaging the scraper's different elements (the API, the scraper itself, and the backend storage) as Docker containers that can be run locally or in the cloud. We will also examine implementing the scraper as a microservice that can be scaled independently.
Much of the focus will be on using Docker to create our containerized scraper. Docker provides a convenient means of packaging the scraper's various components as services: the API, the scraper itself, and backends such as Elasticsearch and RabbitMQ. By containerizing these components with Docker, we can easily run the containers locally, orchestrate the different containers that make up the services, and also conveniently...
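As a rough preview of what this orchestration can look like, a docker-compose file might declare each component as its own service. This is a minimal sketch only: the service names, image tags, build directories, and ports below are illustrative assumptions, not the chapter's actual configuration.

```yaml
# Hypothetical docker-compose.yml sketch for the scraper stack.
# All names, tags, and paths here are assumptions for illustration.
version: '3'
services:
  elasticsearch:          # backend storage for scraped data
    image: elasticsearch:7.17.0
    ports:
      - "9200:9200"
  rabbitmq:               # message queue between the API and scraper workers
    image: rabbitmq:3-management
    ports:
      - "5672:5672"
  api:
    build: ./api          # assumed directory containing the API's Dockerfile
    ports:
      - "8080:8080"
    depends_on:
      - elasticsearch
      - rabbitmq
  scraper:
    build: ./scraper      # assumed directory for the scraper worker
    depends_on:
      - rabbitmq
```

With a file like this, `docker-compose up` starts the whole stack locally, and an individual service such as the scraper can be scaled independently with `docker-compose up --scale scraper=3`, which is the microservice property discussed above.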