Chapter 9. Design Strategies and Case Studies
With the possible exception of data munging, evaluation is probably where machine learning scientists spend most of their time: staring at lists of numbers and graphs, watching hopefully as their models run, and trying earnestly to make sense of their output. Evaluation is a cyclical process: we run a model, evaluate the results, and plug in new parameters, each time hoping for a performance gain (a sketch of this cycle follows the list below). Our work becomes more enjoyable and productive as we increase the efficiency of each evaluation cycle, and there are tools and techniques that can help us achieve this. This chapter will introduce some of these through the following topics:
- Evaluating model performance
- Model selection
- Real-world case studies
- Machine learning design at a glance
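To make the cycle described above concrete, here is a minimal sketch of one pass through it. Note that scikit-learn, its bundled iris dataset, the decision tree classifier, and the small max_depth grid are all illustrative assumptions, not a prescribed setup:

```python
# A minimal sketch of the run/evaluate/re-parameterize cycle.
# Assumptions: scikit-learn is installed; the iris dataset, the
# decision tree model, and the max_depth grid are illustrative only.
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)

best_score, best_depth = 0.0, None
for max_depth in (2, 4, 8, None):                  # plug in new parameters...
    model = DecisionTreeClassifier(max_depth=max_depth, random_state=0)
    scores = cross_val_score(model, X, y, cv=5)    # ...run and evaluate...
    if scores.mean() > best_score:                 # ...keep the best so far
        best_score, best_depth = scores.mean(), max_depth

print(f"best max_depth={best_depth}, mean CV accuracy={best_score:.3f}")
```

Each iteration of the loop is one turn of the evaluation cycle; anything that shortens an iteration, such as caching data or running folds in parallel, directly increases how many cycles we can afford.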
Evaluating model performance
Measuring a model's performance is an important machine learning task, and there are many different metrics and heuristics for doing so...
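As a concrete starting point, the sketch below computes two of the most common such measures, accuracy and a confusion matrix, on a held-out test set. Again, scikit-learn, the iris data, the train/test split, and the logistic regression classifier are assumptions chosen purely for illustration:

```python
# A hedged sketch of basic performance measurement on held-out data.
# Assumptions: scikit-learn; the dataset, split, and classifier are
# illustrative, not the only reasonable choices.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, confusion_matrix
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

clf = LogisticRegression(max_iter=200).fit(X_train, y_train)
y_pred = clf.predict(X_test)

print("accuracy:", accuracy_score(y_test, y_pred))  # single summary number
print(confusion_matrix(y_test, y_pred))             # per-class error detail
```

A single number such as accuracy is convenient for comparing runs, while the confusion matrix shows where the errors actually fall; the sections that follow look at when each kind of measure is appropriate.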