Introduction
In the previous chapter, you learned how classification models can solve problems when the response variable is discrete. You also saw different metrics used to assess the performance of such classifiers. You got hands-on experience in building and training binary, multi-class, and multi-label classifiers with TensorFlow.
When evaluating a model, you will encounter one of three situations: the model is overfitting, the model is underfitting, or the model is performing well. The last is the ideal scenario, in which the model accurately predicts the right outcome and generalizes well to unseen data.
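A common way to tell these three situations apart is to compare the model's performance on the training data with its performance on held-out validation data. The sketch below encodes that comparison as a simple heuristic; the function name `diagnose_fit` and the `target` and `gap` thresholds are illustrative assumptions, not fixed rules, and in practice you would inspect the full learning curves rather than two final numbers.

```python
def diagnose_fit(train_acc, val_acc, target=0.90, gap=0.10):
    """Heuristic sketch: classify a model's fit from final accuracies.

    Hypothetical thresholds: `target` is the minimum acceptable
    training accuracy, `gap` the maximum tolerated train/validation spread.
    """
    if train_acc < target:
        return "underfitting"   # poor even on the training data
    if train_acc - val_acc > gap:
        return "overfitting"    # fits training data but fails to generalize
    return "performing"         # accurate and generalizing well


print(diagnose_fit(train_acc=0.65, val_acc=0.63))  # underfitting
print(diagnose_fit(train_acc=0.98, val_acc=0.80))  # overfitting
print(diagnose_fit(train_acc=0.95, val_acc=0.93))  # performing
```

In a Keras workflow these two numbers would come from the `history` object returned by `model.fit(..., validation_data=...)`, e.g. the final entries of `history.history["accuracy"]` and `history.history["val_accuracy"]`.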
If a model is underfitting, it is failing to achieve satisfactory performance even on the training data and is not accurately predicting the target variable. In this case, a data scientist can tune different hyperparameters to find the combination that best boosts the model's accuracy. Another possibility is to improve the input dataset by handling issues such as the cleanliness of the...