When it comes to running your application, you'll need to decide how your job is going to run. In the previous section, when we submitted our job from the client node, the driver process ran on that same machine while the executors ran on the cluster's worker nodes. Spark is not restricted to this arrangement. It provides three execution modes:
- Local mode
- Client mode
- Cluster mode
In this section, we'll discuss each of these modes in detail and show how to select them with spark-submit.
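As a preview, the mode is chosen through the `--master` and `--deploy-mode` options of spark-submit. The commands below are a sketch; the application JAR name, class name, and cluster manager URL are placeholders you would replace with your own values.

```shell
# Local mode: driver and executors all run in a single JVM on this machine.
# "local[4]" requests 4 worker threads; "local[*]" would use all cores.
spark-submit \
  --master "local[4]" \
  --class com.example.MyApp \
  my-app.jar

# Client mode: the driver runs on the machine where you invoke spark-submit;
# executors run on the cluster's worker nodes. (Client is the default
# deploy mode, so this flag is often omitted.)
spark-submit \
  --master spark://cluster-manager-host:7077 \
  --deploy-mode client \
  --class com.example.MyApp \
  my-app.jar

# Cluster mode: the driver itself is launched on one of the cluster's
# worker nodes, so the submitting machine can disconnect after submission.
spark-submit \
  --master spark://cluster-manager-host:7077 \
  --deploy-mode cluster \
  --class com.example.MyApp \
  my-app.jar
```

The same `--deploy-mode` flag applies when submitting to other cluster managers (such as YARN or Kubernetes); only the `--master` URL changes.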