Chapter 9: Distributed Hyperparameter Tuning with Pachyderm
In Chapter 8, Creating an End-to-End Machine Learning Workflow, we implemented an End-to-End (E2E) Machine Learning (ML) workflow based on a Named-Entity Recognition (NER) pipeline example. This was a multi-step pipeline comprising several computational stages, including data cleaning, Part-Of-Speech (POS) tagging, model training, and running the trained model against various data. Our goal was to find the main characters in the story, which we successfully achieved.
In this chapter, we will explore various strategies that can be implemented to select optimal parameters for an ML problem. This technique is called hyperparameter tuning or optimization. In the second part of this chapter, we will implement a hyperparameter tuning pipeline based on a house price prediction example.
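Before diving into the strategies themselves, here is a minimal sketch of what hyperparameter tuning means in practice: trying combinations of parameter values and keeping the one that scores best on a validation metric. The `validation_error` function and the parameter grid below are hypothetical stand-ins for a real training-and-evaluation step; the exhaustive search shown here is the simplest strategy (grid search), which we will compare against others later in the chapter.

```python
from itertools import product

def validation_error(learning_rate, depth):
    # Hypothetical stand-in for training a model and measuring its
    # validation error; lower is better. A real pipeline would fit a
    # model here and evaluate it on held-out data.
    return (learning_rate - 0.1) ** 2 + (depth - 4) ** 2 * 0.01

# Candidate values for each hyperparameter (assumed for illustration).
grid = {
    "learning_rate": [0.01, 0.1, 0.5],
    "depth": [2, 4, 8],
}

best_params, best_score = None, float("inf")
for values in product(*grid.values()):
    params = dict(zip(grid.keys(), values))
    score = validation_error(**params)
    if score < best_score:
        best_params, best_score = params, score

print(best_params)  # → {'learning_rate': 0.1, 'depth': 4}
```

Grid search is easy to parallelize, since every parameter combination is independent, which is exactly the property a distributed tool such as Pachyderm can exploit.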
This chapter includes the following topics:
- Reviewing hyperparameter tuning techniques and strategies
- Creating a hyperparameter tuning...