Summary
In this chapter, we have learned how to improve the performance of deep learning training pipelines by using distributed training with Horovod and TensorFlow's native support for Spark in Azure Databricks. We have discussed the core algorithms that make it possible to effectively distribute key operations such as gradient descent and model weight updates, how this is implemented in the horovod library included with Databricks Runtime for Machine Learning, and how we can use the native Spark support now available in the TensorFlow framework to train deep learning models in a distributed way.
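As a brief recap of the two approaches discussed in the chapter, the following sketch shows, in simplified form, how a training function can be launched with HorovodRunner (included in Databricks Runtime for Machine Learning) and with the spark-tensorflow-distributor package that provides TensorFlow's native Spark support. The model, data, and worker counts here are illustrative placeholders, not code from the chapter.

```python
def train_with_horovod():
    # Runs once per worker process launched by HorovodRunner
    import horovod.tensorflow.keras as hvd
    import tensorflow as tf

    hvd.init()  # initialize Horovod on this worker
    model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(10,))])
    # Wrap the optimizer so gradients are averaged across workers via all-reduce
    optimizer = hvd.DistributedOptimizer(
        tf.keras.optimizers.Adam(0.001 * hvd.size()))
    model.compile(optimizer=optimizer, loss="mse")
    # ... model.fit(...) with hvd.callbacks.BroadcastGlobalVariablesCallback(0)

# Launch distributed training with HorovodRunner (Databricks Runtime ML)
from sparkdl import HorovodRunner
hr = HorovodRunner(np=2)  # np=2 worker processes is an illustrative choice
hr.run(train_with_horovod)

# Alternatively, use TensorFlow's native Spark support
from spark_tensorflow_distributor import MirroredStrategyRunner

def train_with_tf():
    # The runner configures the tf.distribute strategy; build and fit the model here
    import tensorflow as tf
    ...

MirroredStrategyRunner(num_slots=2, use_gpu=False).run(train_with_tf)
```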
This chapter concludes the book. Hopefully, it has made it easier for you to learn about the many features that Azure Databricks offers for data engineering and data science. As mentioned before, most of the code examples are adaptations of the official libraries or are taken from the Azure Databricks documentation in order...