Distributed and GPU-Based Learning with LightGBM
This chapter looks at training LightGBM models on distributed computing clusters and on GPUs. Distributed computing can significantly speed up training workloads and enables training on datasets far larger than the memory available on a single machine. We'll cover leveraging Dask for distributed computing and LightGBM's support for GPU-based training.
The topics covered in the chapter are as follows:
- Distributed learning with LightGBM and Dask (see the first sketch below)
- GPU training for LightGBM (see the second sketch below)
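As a preview of the Dask material, here is a minimal sketch of distributed training using LightGBM's Dask estimators. The local cluster sizing, synthetic data, and hyperparameters are illustrative assumptions, not settings from this chapter; in practice you would connect the `Client` to a remote scheduler.

```python
import dask.array as da
from dask.distributed import Client, LocalCluster
import lightgbm as lgb

if __name__ == "__main__":
    # A local two-worker cluster for illustration only.
    cluster = LocalCluster(n_workers=2, threads_per_worker=2)
    client = Client(cluster)

    # Synthetic, chunked data: chunks are spread across workers, so no
    # single machine needs to hold the full dataset in memory.
    X = da.random.random((100_000, 20), chunks=(10_000, 20))
    y = da.random.random((100_000,), chunks=(10_000,))

    # DaskLGBMRegressor mirrors the scikit-learn API but trains across workers.
    model = lgb.DaskLGBMRegressor(n_estimators=100, client=client)
    model.fit(X, y)

    # Predictions come back as a lazy Dask array; compute() materializes them.
    preds = model.predict(X).compute()
    print(preds[:5])

    client.close()
    cluster.close()
```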
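And here is a hedged sketch of GPU training. It assumes LightGBM was installed with GPU support compiled in; the synthetic dataset and parameter values are placeholders.

```python
import numpy as np
import lightgbm as lgb

# Synthetic regression data as a stand-in for a real dataset.
rng = np.random.default_rng(42)
X = rng.random((10_000, 20))
y = rng.random(10_000)

train_set = lgb.Dataset(X, label=y)

params = {
    "objective": "regression",
    "device_type": "gpu",  # requires a GPU-enabled LightGBM build
}

# Training proceeds exactly as on CPU; only the device parameter changes.
booster = lgb.train(params, train_set, num_boost_round=100)
print(booster.predict(X[:5]))
```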