Preface
Have you ever wondered why so many machine learning projects fail in production?
In many cases, the cause is a model that fails to generalize, producing unexpected predictions when it encounters new, unseen data. This is what regularization is about: ensuring a model keeps providing the expected predictions, even on data it has never seen before.
In this book, we will explore many forms of regularization. Depending on the recipe, we will approach them from one of two primary angles:
- Given a machine learning model, how do we regularize it? This approach suits applications where the model is already imposed (whether because a legacy solution must be updated or because of strong requirements) and the training data is fixed, so regularizing the model itself is the only option.
- Given a machine learning task, how do we build a robust, well-generalizing solution? This approach suits applications where only the problem is defined and no strong constraints have been imposed yet, so a wider range of solutions can be explored.
Hopefully, these recipes will give you the tools and techniques you need to solve most of the machine learning problems you may face that require regularization, along with a solid practical understanding of the underlying concepts.