Optimizing the learning rate with PyTorch Forecasting
In this recipe, we show how to optimize the learning rate of a PyTorch Forecasting model.
Getting ready
The learning rate is a cornerstone hyperparameter of all deep learning methods. As the name implies, it controls how quickly the network learns, i.e., the size of the step taken at each gradient update. In this recipe, we'll use the same setup as the previous recipe:
datamodule = GlobalDataModule(data=dataset,
                              n_lags=N_LAGS,
                              horizon=HORIZON,
                              batch_size=32,
                              test_size=0.2)
datamodule.setup()
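Before tuning, it helps to see why the learning rate matters. The following standalone sketch (not part of the recipe's setup; the quadratic objective is purely illustrative) runs plain gradient descent on f(x) = x² with different learning rates: too small converges slowly, moderate converges quickly, and too large diverges.

```python
# Illustrative only: gradient descent on f(x) = x**2, whose gradient is 2*x.
def gradient_descent(lr, steps=50, x0=10.0):
    x = x0
    for _ in range(steps):
        x -= lr * 2 * x  # update rule: x_new = x - lr * df/dx
    return x

for lr in (0.01, 0.1, 1.1):
    print(f"lr={lr}: x after 50 steps = {gradient_descent(lr):.6f}")
# lr=0.01 is still far from the minimum at 0, lr=0.1 is essentially
# converged, and lr=1.1 has diverged (|x| grows with every step).
```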
We’ll also use N-BEATS as an example. However, the process is identical for all models based on PyTorch Forecasting.
How to do it…
The optimization of the learning rate can be carried out using the Tuner class from PyTorch Lightning. Here is an example with N-BEATS:
from lightning.pytorch.tuner import Tuner
import lightning.pytorch as pl
from ...
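Under the hood, Lightning's learning rate finder runs a short "LR range test": it tries exponentially spaced learning rates between a minimum and a maximum, records the loss at each, and suggests the rate where the loss drops most steeply. The following pure-Python sketch mimics that idea on a toy quadratic loss; the objective and parameter names are illustrative assumptions, not Lightning's actual implementation.

```python
# Sketch of an LR range test (toy quadratic loss stands in for a few
# real training batches per candidate learning rate).
def lr_range_test(min_lr=1e-5, max_lr=1.0, num=50):
    results = []
    for i in range(num):
        # learning rates spaced exponentially between min_lr and max_lr
        lr = min_lr * (max_lr / min_lr) ** (i / (num - 1))
        # one-step proxy: loss of f(x) = x**2 after a single update from x=1
        x = 1.0 - lr * 2.0
        results.append((lr, x * x))
    return results

def suggest(results):
    # pick the lr whose interval shows the steepest loss decrease
    best_i = min(range(1, len(results)),
                 key=lambda i: results[i][1] - results[i - 1][1])
    return results[best_i][0]

print(f"suggested learning rate: {suggest(lr_range_test()):.4f}")
```

In the real API, the analogous call is roughly `Tuner(trainer).lr_find(model, datamodule=datamodule)`, whose result exposes a `suggestion()` method with the recommended learning rate.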