Evaluation Metrics
Model evaluation is indispensable for building effective models that not only perform well on the data used to train them but also generalize to unseen data. Evaluation is most straightforward in supervised learning problems, where a ground-truth label exists to compare against each of the model's predictions.
Measuring the model's accuracy is crucial before applying it to unseen data, where no label is available for comparison. For example, a model with an accuracy of 98% gives the user reasonable grounds to assume that any given prediction is likely to be correct, and hence that the model can be trusted.
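As a minimal sketch of how accuracy is computed, the snippet below compares a hypothetical set of predictions against ground-truth labels using scikit-learn's `accuracy_score` (the label and prediction values are made up for illustration):

```python
from sklearn.metrics import accuracy_score

# Hypothetical ground-truth labels and model predictions for 10 instances
y_true = [1, 0, 1, 1, 0, 1, 0, 1, 1, 1]
y_pred = [1, 0, 1, 1, 0, 1, 1, 1, 0, 1]

# Accuracy = fraction of predictions that match the true label
acc = accuracy_score(y_true, y_pred)
print(acc)  # 8 of 10 predictions match -> 0.8
```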
As mentioned previously, performance should be evaluated on the validation set (dev set) when fine-tuning the model, and on the test set when estimating the selected model's expected performance on unseen data.
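The split described above can be sketched as follows, using scikit-learn's `train_test_split` twice to carve out a dev set and a test set; the dataset, classifier, and split ratios are illustrative assumptions, not a prescription:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

X, y = load_iris(return_X_y=True)

# First split off a test set (20%), then carve a dev set out of the remainder
# (0.25 of 80% = 20% of the total), leaving 60% for training.
X_rest, X_test, y_rest, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)
X_train, X_dev, y_train, y_dev = train_test_split(
    X_rest, y_rest, test_size=0.25, random_state=0)

model = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)

# Dev-set accuracy guides model selection and tuning;
# test-set accuracy estimates performance on unseen data.
dev_acc = accuracy_score(y_dev, model.predict(X_dev))
test_acc = accuracy_score(y_test, model.predict(X_test))
print(f"dev accuracy = {dev_acc:.2f}, test accuracy = {test_acc:.2f}")
```

The test set is held out entirely until the model has been selected, so the reported test accuracy is not biased by the tuning process.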
Evaluation Metrics for Classification Tasks
A classification...