Evaluating the model
We will now collect evaluation metrics for our model and add them to the model's metadata. We will work on the evaluate_model.py file. You can follow along in an empty file or by going to https://github.com/PacktPublishing/Machine-Learning-Engineering-with-MLflow/blob/master/Chapter08/psystock-training/evaluate_model.py. Proceed as follows:
- Import the relevant packages, pandas and mlflow, for reading the data and running the steps, respectively. We will also import a selection of the model-evaluation metrics that sklearn provides for classification algorithms, as follows:

```python
import pandas as pd
import mlflow
from sklearn.model_selection import train_test_split
from sklearn.metrics import \
    classification_report, \
    confusion_matrix, \
    accuracy_score, \
    auc, \
    average_precision_score, \
    ...
```
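
To make the role of these imports concrete, here is a minimal, self-contained sketch of the kind of evaluate-and-log flow this file builds toward. The data file name, target column, and model choice are hypothetical placeholders for illustration, not the book's actual code:

```python
# A minimal sketch, assuming a binary classification dataset;
# "training_data.csv", the "target" column, and LogisticRegression
# are hypothetical placeholders.
import pandas as pd
import mlflow
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, f1_score, roc_auc_score

df = pd.read_csv("training_data.csv")            # hypothetical file name
X, y = df.drop("target", axis=1), df["target"]   # hypothetical target column
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)

model = LogisticRegression().fit(X_train, y_train)
preds = model.predict(X_test)

with mlflow.start_run():
    # Metrics logged inside an active run become part of that
    # run's metadata in the MLflow tracking server.
    mlflow.log_metric("accuracy", accuracy_score(y_test, preds))
    mlflow.log_metric("f1", f1_score(y_test, preds))
    mlflow.log_metric("roc_auc", roc_auc_score(
        y_test, model.predict_proba(X_test)[:, 1]))
```

Logging each metric inside an active run is what attaches it to the run's metadata, which the MLflow UI and model registry can then surface alongside the model itself.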