To get the most out of this book
To follow along with the examples in this chapter, you will need to do the following:
- Install Docker Desktop as per the instructions at https://www.docker.com/products/docker-desktop/. On a Mac, make sure to choose the installer that matches your chip (Intel chip or Apple chip).
- Install Git as per the instructions at https://git-scm.com/book/en/v2/Getting-Started-Installing-Git.
- Clone the book’s repository locally by running the following:
$ git clone https://github.com/PacktPublishing/Data-Engineering-with-Databricks-Cookbook.git
- Download and build the Docker images for the Spark cluster (one master and two worker nodes) and the JupyterLab notebook environment by running the following command in the cloned repository’s root folder:
$ sh build.sh
Note
This may take several minutes the first time since it has to download and install Spark and all other supporting libraries on the base images.
- Start the local Apache Spark and JupyterLab notebook environment by running docker-compose from the root folder of the cloned repository:
$ docker-compose up
This docker-compose file creates a multi-container application that consists of the following services:
- ZooKeeper: A service that provides coordination and configuration management for distributed systems.
- Kafka: A service that provides a distributed streaming platform for publishing and subscribing to streams of data. It depends on ZooKeeper and uses port 9092. It allows plaintext listeners and has some custom configuration options.
- JupyterLab: A service that provides an interactive web-based environment for data science and machine learning. It uses ports 8888 and 4040 and shares a local volume with the other services. It has a custom image, which includes Spark 3.4.1.
- spark-master: A service that acts as the master node for a Spark cluster. It uses ports 8080 and 7077 and shares a local volume with the other services. It has a custom image, which includes Spark 3.4.1.
- spark-worker-1 and spark-worker-2: Two services that act as worker nodes for the Spark cluster. They depend on spark-master and use port 8081. They have custom images, which include Spark 3.4.1 and environment variables that specify the worker cores and memory.
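Putting the services together, the docker-compose.yml in the repository follows the general shape sketched below. This is an illustrative outline only: the image names, volume name, and worker settings shown here are assumptions, and the actual file in the repository sets additional options (such as the Kafka listener configuration).

```yaml
# Simplified sketch of the multi-container layout; image names are illustrative.
version: "3"
services:
  zookeeper:
    image: zookeeper-image            # placeholder name
    ports:
      - "2181:2181"
  kafka:
    image: kafka-image                # placeholder name
    depends_on:
      - zookeeper
    ports:
      - "9092:9092"
  jupyterlab:
    image: jupyterlab-spark-3.4.1     # custom image built by build.sh (assumed name)
    ports:
      - "8888:8888"                   # JupyterLab UI
      - "4040:4040"                   # Spark application UI
    volumes:
      - shared-workspace:/opt/workspace
  spark-master:
    image: spark-master-3.4.1         # custom image built by build.sh (assumed name)
    ports:
      - "8080:8080"                   # Spark master web UI
      - "7077:7077"                   # Spark master RPC
    volumes:
      - shared-workspace:/opt/workspace
  spark-worker-1:
    image: spark-worker-3.4.1         # custom image built by build.sh (assumed name)
    depends_on:
      - spark-master
    ports:
      - "8081:8081"                   # Spark worker web UI
    environment:
      - SPARK_WORKER_CORES=1          # illustrative values
      - SPARK_WORKER_MEMORY=1g
volumes:
  shared-workspace:
```

A second worker service, spark-worker-2, mirrors spark-worker-1; the shared volume lets notebooks and Spark jobs read and write the same workspace files.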
To run this docker-compose file, you need to have the following minimum system requirements:
- Docker Engine version 18.02.0+ and Docker Compose version 1.25.5+
- At least 6 GB of RAM and 10 GB of disk space
- A Linux, Mac, or Windows operating system that supports Docker
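You can check whether your installed versions meet these minimums from the command line. The sketch below assumes a Linux-style environment with GNU `sort -V` available; the `version_ge` helper is not part of Docker, just a small comparison function for this check.

```shell
#!/bin/sh
# Succeeds (exit 0) if version $1 >= version $2, using version-aware sorting.
version_ge() {
  [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

# Check the Docker Engine (server) version against the 18.02.0 minimum.
engine="$(docker version --format '{{.Server.Version}}' 2>/dev/null)"
if [ -n "$engine" ] && version_ge "$engine" "18.02.0"; then
  echo "Docker Engine $engine: OK"
else
  echo "Docker Engine 18.02.0+ not detected"
fi

# Check Docker Compose (v1 CLI) against the 1.25.5 minimum.
compose="$(docker-compose version --short 2>/dev/null)"
if [ -n "$compose" ] && version_ge "$compose" "1.25.5"; then
  echo "Docker Compose $compose: OK"
else
  echo "Docker Compose 1.25.5+ not detected"
fi
```

If you installed Compose v2, the standalone `docker-compose` binary may be absent; `docker compose version` reports the plugin version instead.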
The system requirements for following the recipes are as follows:

| Software/Hardware covered in the book | OS requirements |
| --- | --- |
| Docker Engine version 18.02.0+ | Windows, Mac OS X, and Linux (any) |
| Docker Compose version 1.25.5+ | Windows, Mac OS X, and Linux (any) |
| Docker Desktop | Windows, Mac OS X, and Linux (any) |
| Git | Windows, Mac OS X, and Linux (any) |
If you are using the digital version of this book, we advise you to type the code yourself or access the code via the GitHub repository (link available in the next section). Doing so will help you avoid any potential errors related to the copying and pasting of code.