Scheduling notebooks in Vertex AI
Jupyter Notebook environments are great for initial experimentation. But when we need to launch long-running jobs, run multiple training trials with different input parameters (such as hyperparameter tuning), or attach accelerators to training jobs, we typically copy our code into Python files and launch the experiments using custom Docker containers or managed pipelines such as Vertex AI Pipelines. To minimize this duplication of effort, Vertex AI managed notebook instances let us schedule notebook executions on an ad hoc or recurring basis. A scheduled notebook is executed cell by cell on Vertex AI, which gives us the flexibility to scale processing power, choose hardware suited to the task, and pass different input parameters for experimentation.
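An ad hoc execution can also be submitted programmatically instead of through the console. The following is a minimal sketch, assuming the google-cloud-notebooks client library (notebooks_v1); the project, region, bucket paths, container image, and parameter values are placeholders, and the ExecutionTemplate fields shown should be verified against the current API reference:

from google.cloud import notebooks_v1

client = notebooks_v1.NotebookServiceClient()

# Describe how the notebook should be run: the machine type, the input .ipynb
# in Cloud Storage, where to write the executed copy, and run-specific
# parameters (all values below are placeholders).
template = notebooks_v1.ExecutionTemplate(
    master_type="n1-standard-4",                        # hardware for this run
    container_image_uri="gcr.io/deeplearning-platform-release/base-cpu",
    input_notebook_file="gs://my-bucket/notebooks/train.ipynb",
    output_notebook_folder="gs://my-bucket/notebook-runs",
    parameters="learning_rate=0.001,epochs=20",         # overrides for this run
)

execution = notebooks_v1.Execution(
    display_name="train-lr-0-001",
    execution_template=template,
)

# create_execution returns a long-running operation; result() waits for the
# create request to complete, not for the notebook run itself to finish.
operation = client.create_execution(
    parent="projects/my-project/locations/us-central1",
    execution_id="train-lr-0-001",
    execution=execution,
)
print(operation.result())

A recurring run is configured along the same lines, except that the execution template is attached to a schedule resource driven by a cron expression rather than submitted as a one-off execution.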
Configuring notebook executions
Let’s try to configure...