Summary
In this chapter, we learned how to run a DL pipeline in different execution environments (local or remote Databricks clusters) using either local source code or code from a GitHub project repository. This is critical not only for reproducibility and flexibility when executing a DL pipeline, but also for better productivity and future automation with CI/CD tools. The ability to run one or more steps of a DL pipeline in remote, resource-rich environments gives us the speed to execute the large-scale, compute- and data-intensive jobs that are typical of production-quality DL model training and fine-tuning, and it allows us to perform hyperparameter tuning or cross-validation of a DL model when necessary. As a natural next step, we will learn how to run large-scale hyperparameter tuning in the next chapter.