Hands-on lab – detecting bias, model explainability, and training privacy-preserving models
Building a comprehensive system for ML governance is a complex task. In this hands-on lab, you will learn how to use some of SageMaker's built-in functionality to support certain aspects of ML governance.
Overview of the scenario
As an ML SA, you have been asked to identify technology solutions that support a project that has regulatory implications. Specifically, you need to determine the technical approaches for data bias detection, model explainability, and privacy-preserving model training. Follow these steps to get started.
Detecting bias in the training dataset
Let's start the hands-on lesson:
- Launch the same SageMaker Studio environment that you have been using in the previous labs.
- Create a new folder called chapter11. This will be our working directory for this lab. Create a new Jupyter notebook and name it bias_explainability...
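Before running Clarify in the notebook, it helps to understand what a pre-training bias metric actually measures. The sketch below computes two of the metrics SageMaker Clarify reports, class imbalance (CI) and difference in positive proportions in labels (DPL), by hand with pandas. The `gender`/`approved` columns and the tiny DataFrame are made-up illustration data, not part of the lab dataset; this is a minimal sketch of the math, not a substitute for running Clarify itself.

```python
# Hand-rolled versions of two pre-training bias metrics that SageMaker
# Clarify reports for a chosen facet (sensitive attribute) column.
# The example column names and data below are hypothetical.
import pandas as pd


def class_imbalance(df: pd.DataFrame, facet: str, advantaged) -> float:
    """CI = (n_a - n_d) / (n_a + n_d): how unevenly the two facet
    groups are represented in the dataset (range -1 to 1, 0 is balanced)."""
    n_a = (df[facet] == advantaged).sum()
    n_d = (df[facet] != advantaged).sum()
    return (n_a - n_d) / (n_a + n_d)


def diff_positive_proportions(df: pd.DataFrame, facet: str,
                              advantaged, label: str) -> float:
    """DPL = q_a - q_d: gap in the positive-label rate between the
    advantaged and disadvantaged facet groups."""
    q_a = df.loc[df[facet] == advantaged, label].mean()
    q_d = df.loc[df[facet] != advantaged, label].mean()
    return q_a - q_d


if __name__ == "__main__":
    # Hypothetical loan-approval toy data: 4 "M" rows vs 2 "F" rows.
    data = pd.DataFrame({
        "gender":   ["M", "M", "M", "F", "M", "F"],
        "approved": [1,   1,   0,   0,   1,   1],
    })
    print(class_imbalance(data, "gender", "M"))            # (4 - 2) / 6
    print(diff_positive_proportions(data, "gender", "M", "approved"))
```

Values near 0 for both metrics suggest the facet groups are represented and labeled similarly; large magnitudes are the kind of signal Clarify flags in its bias report.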