Summary
In this chapter, we focused on how to measure the performance of our ML models and how to choose between models based on that performance. We started by exploring the H2O AutoML leaderboard metrics, since they are the metrics AutoML provides out of the box. We first covered what the MSE and the RMSE are, how they differ, and how they are calculated. We then covered what a confusion matrix is and how to calculate accuracy, sensitivity, specificity, precision, and recall from its values. With our new understanding of sensitivity and specificity, we saw what a ROC curve and its AUC are, and how they can be used to visually compare the performance of different algorithms, as well as the performance of a single model evaluated at different classification thresholds. Building on the ROC-AUC metric, we explored the PR curve, its AUC, and how it overcomes the shortcomings of the ROC curve on imbalanced datasets.
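Although H2O's leaderboard reports these metrics directly, the short sketch below (not taken from the chapter; it assumes scikit-learn and NumPy are installed and uses made-up ground-truth values and predictions) shows how each of the summarized metrics can be computed from raw predictions, which can serve as a sanity check against the leaderboard values:

```python
import numpy as np
from sklearn.metrics import (
    mean_squared_error,
    confusion_matrix,
    roc_auc_score,
    average_precision_score,
)

# --- Regression metrics (hypothetical targets and predictions) ---
y_true_reg = np.array([3.0, 5.0, 2.5, 7.0])
y_pred_reg = np.array([2.8, 5.4, 2.9, 6.1])

mse = mean_squared_error(y_true_reg, y_pred_reg)  # mean of squared errors
rmse = np.sqrt(mse)                               # same units as the target
print(f"MSE={mse:.3f}  RMSE={rmse:.3f}")

# --- Classification metrics (hypothetical labels and scores) ---
y_true = np.array([0, 0, 1, 1, 1, 0, 1, 0])
y_score = np.array([0.1, 0.4, 0.35, 0.8, 0.7, 0.2, 0.9, 0.6])  # predicted probabilities
y_pred = (y_score >= 0.5).astype(int)             # one fixed decision threshold

# Confusion matrix entries and the metrics derived from them
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
accuracy = (tp + tn) / (tp + tn + fp + fn)
sensitivity = tp / (tp + fn)                      # recall / true positive rate
specificity = tn / (tn + fp)                      # true negative rate
precision = tp / (tp + fp)
print(f"acc={accuracy:.2f} sens={sensitivity:.2f} "
      f"spec={specificity:.2f} prec={precision:.2f}")

# Threshold-free summaries: areas under the ROC and PR curves
roc_auc = roc_auc_score(y_true, y_score)
pr_auc = average_precision_score(y_true, y_score)  # a common PR-AUC summary
print(f"ROC-AUC={roc_auc:.2f}  PR-AUC={pr_auc:.2f}")
```

Note that `average_precision_score` is only one common way to summarize the PR curve; H2O's leaderboard reports its own PR-AUC column alongside the AUC, RMSE, and MSE values discussed in this chapter.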