Summary
In this chapter, you created your first ML platform. You configured the ODH components via the ODH Kubernetes operator. You saw how a data engineer persona uses JupyterHub to provision a Jupyter notebook and an Apache Spark cluster instance, with the platform provisioning these environments automatically. You also saw how the platform standardizes the operating environment through container images, which bring consistency and security, and how a data engineer can run Apache Spark jobs directly from a Jupyter notebook.
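As a reminder of how that configuration looked, the ODH operator is typically driven by a KfDef custom resource that lists the components to deploy. The sketch below is illustrative only: the exact component names, manifest paths, and repository URI depend on the ODH version and the odh-manifests release you use, so treat them as placeholders rather than a definitive manifest.

```yaml
# Illustrative KfDef sketch (names and paths are assumptions; check your
# odh-manifests release for the exact values).
apiVersion: kfdef.apps.kubeflow.org/v1
kind: KfDef
metadata:
  name: opendatahub
  namespace: ml-workshop
spec:
  applications:
    # Shared resources required by the other components
    - name: odh-common
      kustomizeConfig:
        repoRef:
          name: manifests
          path: odh-common
    # JupyterHub, used by the data engineer to spawn notebooks
    - name: jupyterhub
      kustomizeConfig:
        repoRef:
          name: manifests
          path: jupyterhub/jupyterhub
  repos:
    - name: manifests
      uri: https://github.com/opendatahub-io/odh-manifests/tarball/master
```

Applying a resource like this with `kubectl apply` (or `oc apply`) is what triggers the operator to reconcile and deploy the listed components.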
All these capabilities allow the data engineer to work autonomously, consuming components on demand in a self-service fashion. This elastic, self-service nature of the platform allows teams to be more productive and agile while responding to the ever-changing requirements of the data and ML worlds.
In the next chapter, you will...