Setting up a batch inference job
The code required for this section is in the pystock-inference-api folder. The MLflow infrastructure is provided in the Docker image accompanying the code, as shown in the following figure:

Figure: The MLflow infrastructure provided by the Docker image that accompanies the code
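If you run that infrastructure, the batch job only needs to be pointed at the tracking server it exposes. The following is a minimal sketch, assuming the server is reachable at http://localhost:5000 (a hypothetical address; use whatever host and port the Docker image actually exposes):

```python
import mlflow

# Point the MLflow client at the tracking server started by the Docker
# image; the URL below is an assumed example, not a value from the book.
mlflow.set_tracking_uri("http://localhost:5000")

# Confirm which backend the client will talk to before scoring.
print(mlflow.get_tracking_uri())
```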
If you have direct access to the artifacts, you can do the following. The code is available under the pystock-inference-batch directory. In order to set up a batch inference job, we will follow these steps:
- Import the dependencies of your batch job; among the relevant dependencies, we include `pandas`, `mlflow`, and `xgboost`:

  ```python
  import pandas as pd
  import mlflow
  import xgboost as xgb
  import mlflow.xgboost
  import mlflow.pyfunc
  ```
- We will next start a run by calling `mlflow.start_run` and load the data from the `input.csv` scoring input file (a fuller sketch of this block follows after these steps):

  ```python
  if __name__ == "__main__":
      with mlflow.start_run(run_name="batch_scoring") as run:
          ...
  ```
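To show where these steps are heading, here is a minimal sketch of how the scoring block can be completed; the registry model URI, the prediction column, and the output.csv file name are illustrative assumptions rather than the book's actual code:

```python
import pandas as pd
import mlflow
import mlflow.pyfunc

if __name__ == "__main__":
    with mlflow.start_run(run_name="batch_scoring") as run:
        # Load the batch of rows to score (file name taken from the text).
        data = pd.read_csv("input.csv")

        # Fetch the trained model; "models:/batch-scoring-model/Production"
        # is a placeholder registry URI, not the book's actual model name.
        model = mlflow.pyfunc.load_model("models:/batch-scoring-model/Production")

        # Score the batch and persist the results (output file name assumed).
        data["prediction"] = model.predict(data)
        data.to_csv("output.csv", index=False)
```

Depending on how the model was logged, `mlflow.xgboost.load_model` could be used instead of the generic `pyfunc` loader to retrieve the native XGBoost model.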