Interpretable Machine Learning with Python

Build explainable, fair, and robust high-performance models with hands-on, real-world examples

Product type: Paperback
Published: October 2023
Publisher: Packt
ISBN-13: 9781803235424
Length: 606 pages
Edition: 2nd Edition
Author: Serg Masís

Table of Contents (17)

Preface
1. Interpretation, Interpretability, and Explainability; and Why Does It All Matter?
2. Key Concepts of Interpretability
3. Interpretation Challenges
4. Global Model-Agnostic Interpretation Methods
5. Local Model-Agnostic Interpretation Methods
6. Anchors and Counterfactual Explanations
7. Visualizing Convolutional Neural Networks
8. Interpreting NLP Transformers
9. Interpretation Methods for Multivariate Forecasting and Sensitivity Analysis
10. Feature Selection and Engineering for Interpretability
11. Bias Mitigation and Causal Inference Methods
12. Monotonic Constraints and Model Tuning for Interpretability
13. Adversarial Robustness
14. What’s Next for Machine Learning Interpretability?
15. Other Books You May Enjoy
16. Index

To get the most out of this book

  • You will need a Jupyter environment with Python 3.9+. You can do either of the following:
    • Install one on your machine locally via Anaconda Navigator or from scratch with pip.
    • Use a cloud-based one, such as Google Colaboratory, Kaggle Notebooks, Azure Notebooks, or Amazon SageMaker.
  • The instructions for getting started vary by environment, so we strongly suggest that you search online for the latest setup guidance.
  • For instructions on installing the many packages employed throughout the book, please go to the GitHub repository, which will have the updated instructions in the README.MD file. We expect these to change over time, given how often packages change. We also tested the code with specific versions detailed in the README.MD, so should anything fail with later versions, please install the specific version instead.
  • Individual chapters have instructions on how to check that the right packages are installed.
  • Depending on how Jupyter was set up, installing packages might be best done through the command line or with conda, so adapt these installation instructions to suit your needs.
  • If you are using the digital version of this book, type the code yourself or access the code via the GitHub repository (link available in the next section). Doing so will help you avoid any potential errors related to the copying and pasting of code.
  • If you are not a machine learning practitioner or are a beginner, it is best to read the book sequentially since many concepts are only explained in great detail in earlier chapters. Practitioners skilled in machine learning but not acquainted with interpretability can skim the first three chapters to get the ethical context and concept definitions required to make sense of the rest, but read the rest of the chapters in order. As for advanced practitioners with foundations in interpretability, reading the book in any order should be fine.
  • As for the code, you can read the book for the theory alone, without running the code at the same time. But if you do plan to run the code, it is best to do so with the book as a guide, to assist with interpreting the outcomes and to strengthen your understanding of the theory.
  • While reading the book, think of ways you could use the tools learned, and by the end of it, hopefully, you will be inspired to put this newly gained knowledge into action!
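The per-chapter package checks mentioned above can be approximated with a small helper. This is a minimal sketch, not the book's own code; the function name `check_versions` and the example pin are hypothetical, and the authoritative version pins live in the repository's README.MD:

```python
from importlib.metadata import version, PackageNotFoundError

def check_versions(expected):
    """Return {package: (installed_version, matches_expected)}.

    `expected` maps package names to pinned version strings
    (the real pins are listed in the repository's README.MD).
    """
    report = {}
    for pkg, pin in expected.items():
        try:
            installed = version(pkg)
        except PackageNotFoundError:
            installed = None  # package is not installed at all
        report[pkg] = (installed, installed == pin)
    return report

# Usage: flag anything missing or mismatched before running a chapter
report = check_versions({"pip": "0.0.0"})  # hypothetical pin
for pkg, (installed, ok) in report.items():
    if not ok:
        print(f"{pkg}: version mismatch or missing (found {installed})")
```

If a check fails, reinstalling the pinned version (for example, `pip install "somepackage==1.2.3"`) is usually the quickest fix.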

Download the example code files

The code bundle for the book is hosted on GitHub at https://github.com/PacktPublishing/Interpretable-Machine-Learning-with-Python-2E/. In case there’s an update to the code, it will be updated on the existing GitHub repository. You can also find the hardware and software list of requirements on the repository in the README.MD file.

We also have other code bundles from our rich catalog of books and videos available at https://github.com/PacktPublishing/. Check them out!

Download the color images

We also provide a PDF file that has color images of the screenshots/diagrams used in this book. You can download it here: https://packt.link/gbp/9781803235424.

Conventions used

There are several text conventions used throughout this book.

CodeInText: Indicates code words in text, database table names, folder names, filenames, file extensions, pathnames, dummy URLs, user input, and Twitter/X handles. For example: “Next, let’s define a device variable because if you have a CUDA-enabled GPU model, inference will perform quicker.”

A block of code is set as follows:

def predict(self, dataset):
    self.model.eval()
    device = torch.device("cuda" if torch.cuda.is_available()
                          else "cpu")
    with torch.no_grad():
        loader = torch.utils.data.DataLoader(dataset, batch_size=32)

When we wish to draw your attention to a particular part of a code block, the relevant lines or items are set in bold:

def predict(self, dataset):
    self.model.eval()
    device = torch.device("cuda" if torch.cuda.is_available()
                          else "cpu")
    with torch.no_grad():
        loader = torch.utils.data.DataLoader(dataset, batch_size=32)

Any command-line input or output is written as follows:

pip install torch

Bold: Indicates a new term, an important word, or words that you see on the screen. For instance, words in menus or dialog boxes appear in the text like this. For example: “The Predictions tab is selected, and this tab has a Data Table to the left where you can select and pin individual data points and a pane with Classification Results to the right.”

Warnings or important notes appear like this.

Tips and tricks appear like this.
