Accuracy
To understand accuracy properly, let's first explore model evaluation. Model evaluation is an integral part of the model development process. Once you build and train your model, the next step is to evaluate it. A model is built on a training dataset, and evaluating its performance on that same training dataset is bad practice in data science. Once a model is trained, it should be evaluated on a dataset that is completely separate from the training data, known as the test dataset. The objective is always to build a model that generalizes, meaning it should produce relatively similar (though not identical) results on any dataset it encounters. This can only be verified by evaluating the model on data it has not seen.
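A minimal sketch of this train/test split idea, using scikit-learn; the synthetic dataset and the 80/20 split ratio here are purely illustrative choices, not prescriptions.

# Split data into a training set and a held-out test set.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

# Illustrative synthetic dataset: 1,000 rows, 10 features, binary target.
X, y = make_classification(n_samples=1000, n_features=10, random_state=42)

# Hold out 20% of the rows as a test set the model never sees during training.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

The model is then fit only on X_train and y_train, and every performance number we report comes from X_test and y_test.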
The model evaluation process requires a metric that can quantify a model's performance. The simplest such metric is accuracy: the fraction of predictions the model gets right, that is, the number of correct predictions divided by the total number of predictions.
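Continuing the split above, a short sketch of how accuracy is computed on the held-out test set; the logistic regression model is an arbitrary choice for illustration.

from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# Train only on the training portion of the data.
model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)

# Evaluate on the unseen test portion.
y_pred = model.predict(X_test)

# Accuracy = correct predictions / total predictions on the test set.
print(accuracy_score(y_test, y_pred))

Because the score is computed on data the model never saw during training, it gives a more honest estimate of how the model will behave in practice than a score computed on the training data.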