Exploring the H2O AutoML leaderboard performance metrics
In Chapter 2, Working with H2O Flow (H2O's Web UI), once we had trained models on a dataset using H2O AutoML, the results were stored in a leaderboard: a table containing the model IDs and certain metric values for the respective models (see Figure 2.33).
The leaderboard ranks the models based on a default metric, which typically appears as the second column in the table. The ranking metric depends on the kind of prediction problem the models were trained on. The following list shows the ranking metric used for each type of ML problem (a short code sketch for inspecting the leaderboard from Python follows the list):
- For binary classification problems, the ranking metric is AUC.
- For multiclass classification problems, the ranking metric is the mean per-class error.
- For regression problems, the ranking metric is deviance (mean residual deviance).
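If you are working from H2O's Python API rather than Flow, a minimal sketch along these lines trains AutoML and retrieves the leaderboard; the dataset path and response column name here are placeholders, not values from the book's example:

```python
# A minimal sketch of training H2O AutoML from Python and inspecting
# the leaderboard. The file path and response column are placeholders.
import h2o
from h2o.automl import H2OAutoML

h2o.init()

# Load a dataset and mark the response as categorical, making this
# a binary classification problem (so the default ranking metric is AUC)
train = h2o.import_file("path/to/dataset.csv")  # placeholder path
y = "response"                                  # placeholder column name
x = [col for col in train.columns if col != y]
train[y] = train[y].asfactor()

# Train AutoML; the default sort metric can be overridden with the
# sort_metric argument if a different ranking is desired
aml = H2OAutoML(max_models=10, seed=1)
aml.train(x=x, y=y, training_frame=train)

# The leaderboard: model IDs ranked by the default metric
lb = aml.leaderboard
print(lb.head(rows=lb.nrows))
```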
Along with the ranking metrics, the leaderboard also provides some additional performance metrics for a better understanding of model quality.
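The additional metric columns are part of the default leaderboard frame itself. Depending on your H2O version, further informational columns (such as per-model training time and algorithm name) can also be requested; a brief sketch, reusing the aml object from the previous example:

```python
# A sketch of retrieving the leaderboard with extra informational
# columns; reuses the aml object from the previous example.
from h2o.automl import get_leaderboard

lb_extended = get_leaderboard(aml, extra_columns="ALL")
print(lb_extended.head())
```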