
Machine Learning Engineering with Python

The Machine Learning Development Process

In this chapter, we will define how the work for any successful machine learning (ML) software engineering project can be divided up. In essence, we will answer the question of how you actually organize the work of a successful ML project. We will not only discuss the process and workflow, but we will also set up the tools you will need for each stage of the process and highlight some important best practices with real ML code examples.

In this edition, there will be more details on an important data science and ML project management methodology: Cross-Industry Standard Process for Data Mining (CRISP-DM). This will include a discussion of how this methodology compares to traditional Agile and Waterfall methodologies and will provide some tips and tricks for applying it to your ML projects. There are also far more detailed examples to help you get up and running with continuous integration/continuous deployment (CI/CD) using GitHub Actions, including how to run ML-focused processes such as automated model validation. The advice on getting up and running in an Integrated Development Environment (IDE) has also been made more tool-agnostic, to allow for those using any appropriate IDE. As before, the chapter will focus heavily on a “four-step” methodology I propose that encompasses a discover, play, develop, deploy workflow for your ML projects. This project workflow will be compared with the CRISP-DM methodology, which is very popular in data science circles. We will also discuss the appropriate development tooling, its configuration, and its integration for a successful project, cover version control strategies and their basic implementation, and set up CI/CD for your ML project. Then, we will introduce some potential execution environments as the target destinations for your ML solutions. By the end of this chapter, you will be set up for success in your Python ML engineering project. This is the foundation on which we will build everything in subsequent chapters.

As usual, we will conclude the chapter by summarizing the main points and highlighting what this means as we work through the rest of the book.

Finally, it is also important to note that although we will frame the discussion here in terms of ML challenges, most of what you will learn in this chapter can also be applied to other Python software engineering projects. My hope is that the investment in building out these foundational concepts in detail will be something you can leverage again and again in all of your work.

We will explore all of this in the following sections and subsections:

  • Setting up our tools
  • Concept to solution in four steps:
    • Discover
    • Play
    • Develop
    • Deploy

There is plenty of exciting stuff to get through and lots to learn – so let’s get started!

Technical requirements

As in Chapter 1, Introduction to ML Engineering, if you want to run the examples provided here, you can create a Conda environment using the environment YAML file provided in the Chapter02 folder of the book’s GitHub repository:

conda env create -f mlewp-chapter02.yml

On top of this, many of the examples in this chapter will require the use of the following software and packages. These will also stand you in good stead for following the examples in the rest of the book:

  • Anaconda
  • PyCharm Community Edition, VS Code, or another Python-compatible IDE
  • Git

You will also need the following:

  • An Atlassian Jira account. We will discuss this more later in the chapter, but you can sign up for one for free at https://www.atlassian.com/software/jira/free.
  • An AWS account. This will also be covered in the chapter, but you can sign up for an account at https://aws.amazon.com/. You will need to add payment details to sign up for AWS, but everything we do in this book will only require the free tier solutions.

The technical steps in this chapter were all tested on both a Linux machine running Ubuntu 22.04 LTS with a user profile that had admin rights and on a MacBook Pro M2 with the setup described in Chapter 1, Introduction to ML Engineering. If you are running the steps on a different system, then you may have to consult the documentation for that specific tool if the steps do not work as planned. Even if this is the case, most of the steps will be the same, or very similar, for most systems. You can also check out all of the code for this chapter in the book’s repository at https://github.com/PacktPublishing/Machine-Learning-Engineering-with-Python-Second-Edition/tree/main/Chapter02. The repo will also contain further resources for getting the code examples up and running.

Setting up our tools

To prepare for the work in the rest of this chapter, and indeed the rest of the book, it will be helpful to set up some tools. At a high level, we need the following:

  • Somewhere to code
  • Something to track our code changes
  • Something to help manage our tasks
  • Somewhere to provision infrastructure and deploy our solution

Let’s look at how to approach each of these in turn:

  • Somewhere to code: First, although the weapon of choice for coding by data scientists is of course Jupyter Notebook, once you begin to make the move toward ML engineering, it will be important to have an IDE to hand. An IDE is basically an application that comes with a series of built-in tools and capabilities to help you to develop the best software that you can. PyCharm is an excellent example for Python developers and comes with a wide variety of plugins, add-ons, and integrations useful to ML engineers. You can download the Community Edition from JetBrains at https://www.jetbrains.com/pycharm/. Another popular development tool is the lightweight but powerful source code editor VS Code. Once you have successfully installed PyCharm, you can create a new project or open an existing one from the Welcome to PyCharm window, as shown in Figure 2.1:

    Figure 2.1: Opening or creating your PyCharm project.

  • Something to track code changes: Next on the list is a code version control system. In this book, we will use GitHub but there are a variety of solutions, all freely available, that are based on the same underlying open-source Git technology. Later sections will discuss how to use these as part of your development workflow, but first, if you do not have a version control system set up, you can navigate to github.com and create a free account. Follow the instructions on the site to create your first repository, and you will be shown a screen that looks something like Figure 2.2. To make your life easier later, you should select Add a README file and Add .gitignore (then select Python). The README file provides an initial Markdown file for you to get started with and somewhere to describe your project. The .gitignore file tells your Git distribution to ignore certain types of files that in general are not important for version control. It is up to you whether you want the repository to be public or private and what license you wish to use. The repository for this book uses the MIT license:

    Figure 2.2: Setting up your GitHub repository.

    Once you have set up your IDE and version control system, you need to make them talk to each other by using the Git plugins provided with PyCharm. This is as simple as navigating to VCS | Enable Version Control Integration and selecting Git. You can edit the version control settings by navigating to File | Settings | Version Control; see Figure 2.3:


    Figure 2.3: Configuring version control with PyCharm.

  • Something to help manage our tasks: You are now ready to write Python and track your code changes, but are you ready to manage or participate in a complex project with other team members? For this, it is often useful to have a solution where you can track tasks, issues, bugs, user stories, and other documentation and items of work. It also helps if this has good integration points with the other tools you will use. In this book, we will use Jira as an example of this. If you navigate to https://www.atlassian.com/software/jira, you can create a free cloud Jira account and then follow the interactive tutorial within the solution to set up your first board and create some tasks. Figure 2.4 shows the task board for this book project, called Machine Learning Engineering in Python (MEIP):

    Figure 2.4: The task board for this book in Jira.

  • Somewhere to provision infrastructure and deploy our solution: Everything that you have just installed and set up is tooling that will really help take your workflow and software development practices to the next level. The last piece of the puzzle is having the tools, technologies, and infrastructure available for deploying the end solution. The management of computing infrastructure for applications was (and often still is) the province of dedicated infrastructure teams, but with the advent of public clouds, there has been real democratization of this capability for people working across the spectrum of software roles. In particular, modern ML engineering is very dependent on the successful implementation of cloud technologies, usually through the main public cloud providers such as Amazon Web Services (AWS), Microsoft Azure, or Google Cloud Platform (GCP). This book will utilize tools found in the AWS ecosystem, but all of the tools and techniques you will find here have equivalents in the other clouds.

The flip side of the democratization of capabilities that the cloud brings is that teams who own the deployment of their solutions have to gain new skills and understanding. I am a strong believer in the principle that “you build it, you own it, you run it” as far as possible, but this means that as an ML engineer, you will have to be comfortable with a host of potential new tools and principles, as well as owning the performance of your deployed solution. With great power comes great responsibility and all that. In Chapter 5, Deployment Patterns and Tools, we will dive into this topic in detail.

Let’s talk through setting this up.

Setting up an AWS account

As previously stated, you don’t have to use AWS, but that’s what we’re going to use throughout this book. Once it’s set up here, you can use it for everything we’ll do:

  1. To set up an AWS account, navigate to aws.amazon.com and select Create Account. You will have to add some payment details but everything we mention in this book can be explored through the free tier of AWS, where you do not incur a cost below a certain threshold of consumption.
  2. Once you have created your account, you can navigate to the AWS Management Console, where you can see all the services that are available to you (see Figure 2.5):

Figure 2.5: The AWS Management Console.

With our AWS account ready to go, let’s look at the four steps that cover the whole process.

Concept to solution in four steps

All ML projects are unique in some way: the organization, the data, the people, and the tools and techniques employed will never be exactly the same for any two projects. This is good, as it signifies progress as well as the natural variety that makes this such a fun space to work in.

That said, no matter the details, broadly speaking, all successful ML projects actually have a good deal in common. They require the translation of a business problem into a technical problem, a lot of research and understanding, proofs of concept, analyses, iterations, the consolidation of work, the construction of the final product, and its deployment to an appropriate environment. That is ML engineering in a nutshell!

Developing this a bit further, you can start to bucket these activities into rough categories or stages, the results of each being necessary inputs for later stages. This is shown in Figure 2.6:


Figure 2.6: The stages that any ML project goes through as part of the ML development process.

Each category of work has a slightly different flavor, but taken together, they provide the backbone of any good ML project. The next few sections will develop the details of each of these categories and begin to show you how they can be used to build your ML engineering solutions. As we will discuss later, it is also not necessary for you to tackle your entire project in four steps like this; you can actually work through each of these steps for a specific feature or part of your overall project. This will be covered in the Selecting a software development methodology section.

Let’s make this a bit more real. The main focus and outputs of every stage can be summarized as shown in Table 2.1:

Discover:

  • Clarity on the business question.
  • Clear arguments for ML over another approach.
  • Definition of the KPIs and metrics you want to optimize.
  • A sketch of the route to value.

Play:

  • Detailed understanding of the data.
  • Working proof of concept.
  • Agreement on the model/algorithm/logic that will solve the problem.
  • Evidence that a solution is doable within realistic resource scenarios.
  • Evidence that good ROI can be achieved.

Develop:

  • A working solution that can be hosted on appropriate and available infrastructure.
  • Thorough test results and performance metrics (for algorithms and software).
  • An agreed retraining and model deployment strategy.
  • Unit tests, integration tests, and regression tests.
  • Solution packaging and pipelines.

Deploy:

  • A working and tested deployment process.
  • Provisioned infrastructure with appropriate security and performance characteristics.
  • Model retraining and management processes.
  • An end-to-end working solution!

Table 2.1: The outputs of the different stages of the ML development process.

IMPORTANT NOTE

You may think that an ML engineer only really needs to consider the latter two stages, develop and deploy, and that the earlier stages are owned by the data scientist or even a business analyst. We will indeed focus mainly on these stages throughout this book, and this division of labor can work very well. It is, however, crucially important that if you are going to build an ML solution, you understand all of the motivations and development steps that have gone before – you wouldn’t build a new type of rocket without understanding where you want to go first, would you?

Comparing this to CRISP-DM

The high-level categorization of project steps that we will outline in the rest of this chapter has many similarities to, and some differences from, an important methodology known as CRISP-DM. This methodology was published in 1999 and has since gathered a large following as a way to understand how to build any data project. In CRISP-DM, there are six different phases of activity, covering similar ground to that outlined in the four steps described in the previous section:

  1. Business understanding: This is all about getting to know the business problem and domain area. This becomes part of the Discover phase in the four-step model.
  2. Data understanding: Extending the knowledge of the business domain to include the state of the data, its location, and how it is relevant to the problem. Also included in the Discover phase.
  3. Data preparation: Starting to take the data and transform it for downstream use. This will often have to be iterative. Captured in the Play stage.
  4. Modeling: Taking the prepared data and then developing analytics on top of it; this could now include ML of various levels of sophistication. This is an activity that occurs both in the Play and Develop phases of the four-step methodology.
  5. Evaluation: This stage is concerned with confirming whether the solution will meet the business requirements and performing a holistic review of the work that has gone before. This helps confirm whether anything was overlooked or could be improved upon. This is very much part of the Develop and Deploy phases; in the methodology we will describe in this chapter, these tasks are baked in across the whole project.
  6. Deployment: In CRISP-DM, this was originally focused on deploying simple analytics solutions like dashboards or scheduled ETL pipelines that would run the decided-upon analytics models.

    In the world of modern ML engineering, this stage can represent, well, anything talked about in this book! CRISP-DM suggests sub-stages around planning and then reviewing the deployment.

As you can see from the list, many steps in CRISP-DM cover similar topics to those outlined in the four steps I propose. CRISP-DM is extremely popular across the data science community and so its merits are definitely appreciated by a huge number of data professionals across the world. Given this, you might be wondering, “Why bother developing something else then?” Let me convince you of why this is a good idea.

The CRISP-DM methodology is just another way to group the important activities of any data project in order to give them some structure. As you can perhaps see from the brief description of the stages I gave above and if you do further research, CRISP-DM has some potential drawbacks for use in a modern ML engineering project:

  • The process outlined in CRISP-DM is relatively rigid and quite linear. This can be beneficial for providing structure but might inhibit moving fast in a project.
  • The methodology is very big on documentation. Most steps detail writing some kind of report, review, or summary. Writing and maintaining good documentation is absolutely critical in a project but there can be a danger of doing too much.
  • CRISP-DM was written in a world before “big data” and large-scale ML. It is unclear to me whether its details still apply in such a different world, where classic extract-transform-load patterns are only one approach among many.
  • CRISP-DM definitely comes from the data world and then tries to move toward the idea of a deployable solution in the last stage. This is laudable, but in my opinion, this is not enough. ML engineering is a different discipline in the sense that it is far closer to classic software engineering than not. This is a point that this book will argue time and again. It is therefore important to have a methodology where the concepts of deployment and development are aligned with software and modern ML techniques all the way through.

The four-step methodology attempts to alleviate some of these challenges and does so in a way that constantly makes reference to software engineering and ML skills and techniques. This does not mean that you should never use CRISP-DM in your projects; it might just be the perfect thing! As with many of the concepts introduced in this book, the important thing is to have many tools in your toolkit so that you can select the one most appropriate for the job at hand.

Given this, let’s now go through the four steps in detail.

Discover

Before you start working to build any solution, it is vitally important that you understand the problem you are trying to solve. This activity is often termed discovery in business analysis and is crucial if your ML project is going to be a success.

The key things to do during the discovery phase are the following:

  • Speak to the customer! And then speak to them again: You must understand the end user requirements in detail if you are to design and build the right system.
  • Document everything: You will be judged on how well you deliver against the requirements, so make sure that all of the key points from your discussion are documented and signed off by members of your team and the customer or their appropriate representative.
  • Define the metrics that matter: It is very easy at the beginning of a project to get carried away and to feel like you can solve any and every problem with the amazing new tool you are going to build. Fight this tendency as aggressively as you can, as it can easily cause major headaches later on. Instead, steer your conversations toward defining a single or very small number of metrics that define what success will look like.
  • Start finding out where the data lives!: If you can start working out what kind of systems you will have to access to get the data you need, this saves you time later and can help you find any major issues before they derail your project.

Using user stories

Once you have spoken to the customer (a few times), you can start to define some user stories. User stories are concise and consistently formatted expressions of what the user or customer wants to see and the acceptance criteria for that feature or unit of work. For example, we may want to define a user story based on the taxi ride example from Chapter 1, Introduction to ML Engineering: “As a user of our internal web service, I want to see anomalous taxi rides and be able to investigate them further.”

Let’s begin!

  1. To add this in Jira, select the Create button.
  2. Next, select Story.
  3. Then, fill in the details as you deem appropriate.

You have now added a user story to your work management tool! This allows you to do things such as create new tasks and link them to this user story or update its status as your project progresses:


Figure 2.7: An example user story in Jira.

The data sources you use are particularly crucial to understand. As you know, garbage in, garbage out, or even worse, no data, no go! The particular questions you have to answer about the data are mainly centered around access, technology, quality, and relevance.

For access and technology, you are trying to pre-empt how much work the data engineers have to do to start their pipeline of work and how much this will hold up the rest of the project. It is therefore crucial that you get this one right.

A good example would be if you find out quite quickly that the main bulk of data you will need lives in a legacy internal financial system with no real modern APIs and no access request mechanism for non-finance team members. If its main backend is on-premises and you need to migrate locked-down financial data to the cloud, but this makes your business nervous, then you know you have a lot of work to do before you type a line of code. If the data already lives in an enterprise data lake that your team has access to, then you are obviously in a better position. Any challenge is surmountable if the value proposition is strong enough, but finding all this out early will save you time, energy, and money later on.

Relevance is a bit harder to find out before you kick off, but you can begin to get an idea. For example, if you want to perform the inventory forecast we discussed in Chapter 1, Introduction to ML Engineering, do you need to pull in customer account information? If you want to create the classifier of premium or non-premium customers as marketing targets, also mentioned in Chapter 1, Introduction to ML Engineering, do you need to have data on social media feeds? The question as to what is relevant will often be less clear-cut than for these examples but an important thing to remember is that you can always come back to it if you really missed something important. You are trying to capture the most important design decisions early, so common sense and lots of stakeholder and subject-matter expert engagement will go a long way.

Data quality is something that you can try to anticipate a little before moving forward in your project with some questions to current users or consumers of the data or those involved in its entry processes. To get a more quantitative understanding though, you will often just need to get your data scientists working with the data in a hands-on manner.

In the next section, we will look at how we develop proof-of-concept ML solutions in the most research-intensive phase, Play.

Play

In the play stage of the project, your aim is to work out whether solving the task even at the proof-of-concept level is feasible. To do this, you might employ the usual data science bread-and-butter techniques of exploratory data analysis and explanatory modeling we mentioned in the last chapter before moving on to creating an ML model that does what you need.

In this part of the process, you are not overly concerned with details of implementation, but with exploring the realms of possibility and gaining an in-depth understanding of the data and the problem, which goes beyond initial discovery work. Since the goal here is not to create production-ready code or to build reusable tools, you should not worry about whether or not the code you are writing is of the highest quality, or using sophisticated patterns. For example, it will not be uncommon to see code that looks something like the following examples (taken, in fact, from the repo for this book):


Figure 2.8: Some example prototype code that will be created during the play stage.

Even a quick glance at these screenshots tells you a few things:

  • The code is in a Jupyter notebook, which is run by a user interactively in a web browser.
  • The code sporadically calls methods to simply check or explore elements of the data (for example, df.head() and df.dtypes).
  • There is ad hoc code for plotting (and it’s not very intuitive!).
  • There is a variable called tmp, which is not very descriptive.
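
For concreteness, here is a minimal sketch of the kind of throwaway exploratory code shown in these screenshots (a hypothetical recreation, assuming a pandas DataFrame loaded from a CSV file, rather than the exact code from the book’s repository):

import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("data.csv")  # hypothetical input file

# Quick, ad hoc checks of the data
df.head()
df.dtypes

# Throwaway plotting with a non-descriptive temporary variable
tmp = df.groupby(df.columns[0]).size()
tmp.plot(kind="bar")
plt.show()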

All of this is absolutely fine in this more exploratory phase, but one of the aims of this book is to help you understand what is required to take code like this and make it into something suitable for your production ML pipelines. The next section starts us along this path.

Develop

As we have mentioned a few times already, one of the aims of this book is to get you thinking about the fact that you are building software products that just happen to have ML in them. This means a steep learning curve for some of us who have come from more mathematical and algorithmic backgrounds. This may seem intimidating but do not despair! The good news is that we can reuse a lot of the best practices and techniques honed through the software engineering community over several decades. There is nothing new under the sun.

This section explores several of those methodologies, processes, and considerations that can be employed in the development phase of our ML engineering projects.

Selecting a software development methodology

One of the first things we could and should shamelessly replicate as ML engineers is the software development methodologies that are utilized in projects across the globe. One category of these, often referred to as Waterfall, covers project workflows that fit quite naturally with the idea of building something complex (think a building or a car). In Waterfall methodologies, there are distinct and sequential phases of work, each with a clear set of outputs that are needed before moving on to the next phase. For example, a typical Waterfall project may have phases that broadly cover requirements-gathering, analysis, design, development, testing, and deployment (sound familiar?). The key thing is that in a Waterfall-flavored project, when you are in the requirements-gathering phase, you should only be working on gathering requirements, when in the testing phase, you should only be working on testing, and so on. We will discuss the pros and cons of this for ML in the next few paragraphs after introducing another set of methodologies.

The other set of methodologies, termed Agile, began its life after the introduction of the Agile Manifesto in 2001 (https://agilemanifesto.org/). At the heart of Agile development are the ideas of flexibility, iteration, incremental updates, failing fast, and adapting to changing requirements. If you are from a research or scientific background, this concept of flexibility and adaptability based on results and new findings may sound familiar.

What may not be so familiar to you if you have this type of scientific or academic background is that you can still embrace these concepts within a relatively strict framework that is centered around delivery outcomes. Agile software development methodologies are all about finding the balance between experimentation and delivery. This is often done by introducing the concepts of ceremonies (such as Scrums and Sprint Retrospectives) and roles (such as Scrum Master and Product Owner).

Further to this, within Agile development, there are two variants that are extremely popular: Scrum and Kanban. Scrum projects are centered around short units of work called Sprints where the idea is to make additions to the product from ideation through to deployment in that small timeframe. In Kanban, the main idea is to achieve a steady flow of tasks from an organized backlog into work in progress through to completed work.

All of these methodologies (and many more besides) have their merits and their drawbacks. You do not have to be married to any of them; you can chop and change between them. For example, in an ML project, it may make sense to do post-deployment work that focuses on maintaining an already existing service (sometimes termed a business-as-usual activity), such as further model improvements or software optimizations, in a Kanban framework, while the main delivery of your core body of work happens in Sprints with very clear outcomes. See what fits best for your use cases, your team, and your organization.

But what makes applying these types of workflows to ML projects different? What do we need to think about in this world of ML that we didn’t before? Well, some of the key points are the following:

  • You don’t know what you don’t know: You cannot know whether you will be able to solve the problem until you have seen the data. Traditional software engineering is not as critically dependent on the data that will flow through the system as ML engineering is. We can know how to solve a problem in principle, but if the appropriate data does not exist in sufficient quantity or is of poor quality, then we can’t solve the problem in practice.
  • Your system is alive: If you build a classic website, with its backend database, shiny frontend, amazing load-balancing, and other features, then realistically, if the resource is there, it can just run forever. Nothing fundamental changes about the website and how it runs over time. Clicks still get translated into actions and page navigation still happens the same way. Now, consider putting some ML-generated advertising content based on typical user profiles in there. What is a typical user profile and does that change with time? With more traffic and more users, do behaviors that we never saw before become the new normal? Your system is learning all the time and that leads to the problems of model drift and distributional shift, as well as more complex update and rollback scenarios.
  • Nothing is certain: When building a system that uses rule-based logic, you know what you are going to get each and every time. If X, then Y means just that, always. With ML models, it is often much harder to know what the answer is when you ask the question, which is in fact why these algorithms are so powerful.

But it does mean that you can get unpredictable behavior, either for the reasons discussed previously or simply because the algorithm has learned something about the data that is not obvious to a human observer. In addition, because ML algorithms are often based on probabilistic and statistical concepts, results come attached to some uncertainty or fuzziness. A classic example is when you apply logistic regression and receive the probability of the data point belonging to one of the classes. It is a probability, so you cannot say with certainty that it is the case, just how likely it is! This is particularly important to consider when the outputs of your ML system will be leveraged by users or other systems to make decisions.
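
To make this last point concrete, here is a minimal, hypothetical sketch (using scikit-learn and one of its bundled toy datasets, which are not tied to the book’s examples) showing that the model returns class probabilities rather than certainties:

from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# A toy binary classification problem purely for illustration
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

clf = make_pipeline(StandardScaler(), LogisticRegression())
clf.fit(X_train, y_train)

# predict() gives a hard label; predict_proba() exposes the underlying
# uncertainty - a probability for each class, never a guarantee.
print(clf.predict(X_test[:1]))        # e.g. [1]
print(clf.predict_proba(X_test[:1]))  # e.g. [[0.02 0.98]]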

Given these issues, let’s try to understand which development methodologies can help us when we build our ML solutions. In Table 2.2, we can see some advantages and disadvantages of the Agile and Waterfall approaches for different stages and types of ML engineering projects:

Agile

  Pros:
  • Flexibility is expected.
  • Faster dev to deploy cycles.

  Cons:
  • If not well managed, can easily have scope drift.
  • Kanban or Sprints may not work well for some projects.

Waterfall

  Pros:
  • Clearer path to deployment.
  • Clear staging and ownership of tasks.

  Cons:
  • Lack of flexibility.
  • Higher admin overheads.

Table 2.2: Agile versus Waterfall for ML development.

Let’s move on to the next section!

Package management (conda and pip)

If I told you to write a program that did anything in data science or ML without using any libraries or packages and just pure Python, you would probably find this quite difficult to achieve in any reasonable amount of time, and incredibly boring! This is a good thing. One of the really powerful features of developing software in Python is that you can leverage an extensive ecosystem of tools and capabilities relatively easily. The flip side of this is that it would be very easy for managing the dependencies of your code base to become a very complicated and hard-to-replicate task. This is where package and environment managers such as pip and conda come in.

pip is the standard package manager in Python and the one recommended for use by the Python Packaging Authority (PyPA).

It retrieves and installs Python packages from PyPI, the Python Package Index. pip is super easy to use and is often the suggested way to install packages in tutorials and books.

conda is the package and environment manager that comes with the Anaconda and Miniconda Python distributions. A key strength of conda is that although it comes from the Python ecosystem, and it has excellent capabilities there, it is actually a more general package manager. As such, if your project requires dependencies outside Python (the NumPy and SciPy libraries being good examples), then although pip can install these, it can’t track all the non-Python dependencies, nor manage their versions. With conda, this is solved.

You can also use pip within conda environments, so you can get the best of both worlds or use whatever you need for your project. The typical workflow I follow is to use conda to create and manage my environments, to install with conda any packages that may require non-Python dependencies and so are not captured well by pip, and then to use pip for most other installations within the created conda environment. Given this, throughout the book, you may see pip or conda installation commands used interchangeably. This is perfectly fine.

To get started with Conda, if you haven’t already, you can download the Individual distribution installer from the Anaconda website (https://www.anaconda.com/products/individual). Anaconda comes with some Python packages already installed, but if you want to start from a completely empty environment, you can download Miniconda from the same website instead (they have the exact same functionality; you just start from a different base).

The Anaconda documentation is very helpful for getting you up to speed with the appropriate commands, but here is a quick tour of some of the key ones.

First, if we want to create a conda environment called mleng with Python version 3.10 installed, we simply execute the following in our terminal:

conda create --name mleng python=3.10

We can then activate the conda environment by running the following:

source activate mleng

This means that any new conda or pip commands will install packages in this environment and not system-wide.
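
If you want a quick sanity check that your interpreter is really running from the new environment (not a step from the book, just a useful habit), you can ask Python itself:

import sys

# Both paths should point inside the conda environment (for example,
# somewhere under .../envs/mleng/) rather than at the system-wide Python.
print(sys.executable)
print(sys.prefix)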

We often want to share the details of our environment with others working on the same project, so it can be useful to export all the package configurations to a .yml file:

conda env export > environment.yml

The GitHub repository for this book contains a file called mleng-environment.yml for you to create your own instance of the mleng environment. The following command creates an environment with this configuration using this file:

conda env create --file mleng-environment.yml

This pattern of creating a conda environment from an environment file is a nice way to get your environments set up for running the examples in each of the chapters in the book. So, the Technical requirements section in each chapter will point to the name of the correct environment YAML file contained in the book’s repository.

These commands, coupled with your classic conda or pip install command, will set you up for your project quite nicely!

conda install <package-name>

Or

pip install <package-name>

I think it is always good to have many options for doing something; in general, this is good engineering practice. So, now that we have covered the classic Python environment and package managers, conda and pip, we will cover one more package manager. This is a tool that I like for its ease of use and versatility, and I think it provides a nice extension of the capabilities of conda and pip and can complement them nicely. This tool is called Poetry, and it is what we turn to now.

Poetry

Poetry is another package manager that has become very popular in recent years. It allows you to manage your project’s dependencies and package information into a single configuration file in a similar way to the environment YAML file we discussed in the section on Conda. Poetry’s strength lies in its far superior ability to help you manage complex dependencies and ensure “deterministic” builds, meaning that you don’t have to worry about the dependency of a package updating in the background and breaking your solution. It does this via the use of “lock files” as a core feature, as well as in-depth dependency checking. This means that reproducibility can often be easier in Poetry. It is important to call out that Poetry is focused on Python package management specifically, while Conda can also install and manage other packages, for example, C++ libraries. One way to think of Poetry is that it is like an upgrade of the pip Python installation package, but one that also has some environment management capability. The next steps will explain how to set up and use Poetry for a very basic use case.

We will build on this with some later examples in the book. First, follow these steps:

  1. First, as usual, we will install Poetry:
    pip install poetry
    
  2. After Poetry is installed, you can create a new project using the poetry new command, followed by the name of your project:
    poetry new mleng-with-python
    
  3. This will create a new directory named mleng-with-python with the necessary files and directories for a Python project. To manage your project’s dependencies, you can add them to the pyproject.toml file in the root directory of your project. This file contains all of the configuration information for your project, including its dependencies and package metadata.

    For example, if you are building an ML project and want to use the scikit-learn library, you would add the following to your pyproject.toml file:

    [tool.poetry.dependencies]
    scikit-learn = "*"
    
  4. You can then install the dependencies for your project by running the following command. This will install the scikit-learn library and any other dependencies specified in your pyproject.toml file:
    poetry install
    
  5. To use a dependency in your project, you can simply import it in your Python code like so:
    from sklearn import datasets
    from sklearn.model_selection import train_test_split
    from sklearn.linear_model import LogisticRegression
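
    As a quick smoke test that the dependency was installed and is importable (a minimal sketch using scikit-learn’s bundled iris dataset, not code from the book’s repository), you could put something like the following in a script and run it with poetry run python smoke_test.py:

    # smoke_test.py - assumes only that scikit-learn was installed by Poetry
    from sklearn import datasets
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    X, y = datasets.load_iris(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    print(f"Test accuracy: {model.score(X_test, y_test):.3f}")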
    

As you can see, getting started with Poetry is very easy. We will return to using Poetry throughout the book in order to give you examples that complement the knowledge of Conda that we will develop. Chapter 4, Packaging Up, will discuss this in detail and will show you how to get the most out of Poetry.

Code version control

If you are going to write code for real systems, you are almost certainly going to do it as part of a team. You are also going to make your life easier if you can have a clean audit trail of changes, edits, and updates so that you can see how the solution has developed. Finally, you are going to want to cleanly and safely separate out the stable versions of the solution that you are building and that can be deployed versus more transient developmental versions. All of this, thankfully, is taken care of by source code version control systems, the most popular of which is Git.

We will not go into how Git works under the hood here (there are whole books on the topic!) but we will focus on understanding the key practical elements of using it:

  1. You already have a GitHub account from earlier in the chapter, so the first thing to do is to create a repository with Python as the language and initialize README.md and .gitignore files. The next thing to do is to get a local copy of this repository by running the following command in Bash, Git Bash, or another terminal:
    git clone <repo-name>
    
  2. Now that you have done this, go into the README.md file and make some edits (anything will do). Then, run the following commands to tell Git to monitor this file and to save your changes locally with a message briefly explaining what these are:
    git add README.md
    git commit -m "I've made a nice change …"
    

    This now means that your local Git instance has stored what you’ve changed and is ready to share that with the remote repo.

  3. You can then incorporate these changes into the main branch by doing the following:
    git push origin main
    

    If you now go back to the GitHub site, you will see that the changes have taken place in your remote repository and that the comments you added have accompanied the change.

  4. Other people in your team can then get the updated changes by running the following:
    git pull origin main
    

These steps are the absolute basics of Git and there is a ton more you can learn online. What we will do now, though, is start setting up our repo and workflow in a way that is relevant to ML engineering.

Git strategies

The presence of a strategy for using version control systems can often be a key differentiator between the data science and ML engineering aspects of a project. It can sometimes be overkill to define a strict Git strategy for exploratory and basic modeling stages (Discover and Play) but if you want to engineer something for deployment (and you are reading this book, so this is likely where your head is at), then it is fundamentally important.

Great, but what do we mean by a Git strategy?

Well, let’s imagine that we just try to develop our solution without a shared direction on how to organize the versioning and code.

ML engineer A wants to start building some of the data science code into a Spark ML pipeline (more on this later), so they create a branch from main called pipeline1spark:

git checkout -b pipeline1spark

They then get to work on the branch and write some nice code in a new file called pipeline.py:

from pyspark.ml import Pipeline
from pyspark.ml.classification import LogisticRegression
from pyspark.ml.feature import HashingTF, Tokenizer

# Configure an ML pipeline, which consists of three stages: tokenizer, hashingTF, and lr.
tokenizer = Tokenizer(inputCol="text", outputCol="words")
hashingTF = HashingTF(inputCol=tokenizer.getOutputCol(),
                      outputCol="features")
lr = LogisticRegression(maxIter=10, regParam=0.001)
pipeline = Pipeline(stages=[tokenizer, hashingTF, lr])

Great, they’ve made some excellent progress in translating some previous sklearn code into Spark, which was deemed more appropriate for the use case. They then keep working in this branch because it has all of their additions, and they think it’s better to do everything in one place. When they want to push the branch to the remote repository, they run the following command:

git push origin pipeline1spark

ML engineer B comes along, and they want to use ML engineer A’s pipeline code and build some extra steps around it. They know engineer A’s code has a branch with this work, so they know enough about Git to create another branch with A’s code in it, which B calls pipeline:

git pull origin pipeline1spark
git checkout pipeline1spark
git checkout -b pipeline

They then add some code to read the parameters for the model from a variable:

lr = LogisticRegression(maxIter=model_config["maxIter"], 
                        regParam=model_config["regParam"])

Cool, engineer B has made an update that is starting to abstract away some of the parameters. They then push their new branch to the remote repository:

git push origin pipeline

Finally, ML engineer C joins the team and wants to get started on the code. Opening up Git and looking at the branches, they see there are three:

main
pipeline1spark
pipeline

So, which one should be taken as the most up to date? If they want to make new edits, where should they branch from? It isn’t clear, but more dangerous than that, if they are tasked with pushing deployment code to the execution environment, they may think that main has all the relevant changes. On a far busier project that’s been going on for a while, they may even branch off from main and duplicate some of A and B’s work! In a small project, you would waste time going on this wild goose chase; in a large project with many different lines of work, you would have very little chance of maintaining a good workflow:

# Branch pipeline1spark - Commit 1 (Engineer A)
lr = LogisticRegression(maxIter=10, regParam=0.001)
pipeline = Pipeline(stages=[tokenizer, hashingTF, lr])
# Branch pipeline - Commit 2 (Engineer B)
lr = LogisticRegression(maxIter=model_config["maxIter"], 
                        regParam=model_config["regParam"])
pipeline = Pipeline(stages=[tokenizer, hashingTF, lr])

If these commits both get pushed to the main branch at the same time, then we will get what is called a merge conflict, and the engineer will have to choose which piece of code to keep, the current version or the new one. If engineer A pushed their changes to main first, the conflict would look something like this:

<<<<<<< HEAD
lr = LogisticRegression(maxIter=10, regParam=0.001)
pipeline = Pipeline(stages=[tokenizer, hashingTF, lr])
=======
lr = LogisticRegression(maxIter=model_config["maxIter"], 
                        regParam=model_config["regParam"])
pipeline = Pipeline(stages=[tokenizer, hashingTF, lr])
>>>>>>> pipeline

The delimiters in the code show that there has been a merge conflict and that it is up to the developer to select which of the two versions of the code they want to keep.

IMPORTANT NOTE

Although, in this simple case, we could potentially trust the engineers to select the better code, allowing situations like this to occur very frequently is a huge risk to your project. This not only wastes a huge amount of precious development time but it could also mean that you actually end up with worse code!

The way to avoid confusion and extra work like this is to have a very clear strategy for the use of the version control system in place, such as the one we will now explore.

The Gitflow workflow

The biggest problem with the previous example was that all of our hypothetical engineers were actually working on the same piece of code in different places. To stop situations like this, you have to create a process that your team can all follow – in other words, a version control strategy or workflow.

One of the most popular of these strategies is the Gitflow workflow. This builds on the basic idea of having branches that are dedicated to features and extends it to incorporate the concept of releases and hotfixes, which are particularly relevant to projects with a continuous deployment element.

The main idea is we have several types of branches, each with clear and specific reasons for existing:

  • Main contains your official releases and should only contain the stable version of your code.
  • Dev acts as the main point for branching from and merging to for most work in the repository; it contains the ongoing development of the code base and acts as a staging area before main.
  • Feature branches should not be merged straight into the main branch; everything should branch off from dev and then be merged back into dev.
  • Release branches are created from dev to kick off a build or release process before being merged into main and dev and then deleted.
  • Hotfix branches are for removing bugs in deployed or production software. You can branch this from main before merging into main and dev when done.

This can all be summarized diagrammatically as in Figure 2.9, which shows how the different branches contribute to the evolution of your code base in the Gitflow workflow:


Figure 2.9: The Gitflow workflow.

This diagram is taken from https://lucamezzalira.com/2014/03/10/git-flow-vs-github-flow/. More details can be found at https://www.atlassian.com/git/tutorials/comparing-workflows/gitflow-workflow.

If your ML project can follow this sort of strategy (and you don’t need to be completely strict about this if you want to adapt it), you will likely see a drastic improvement in productivity, code quality, and even documentation:

Figure 2.10: Example code changes upon a pull request in GitHub.

One important aspect we haven’t discussed yet is the concept of code reviews. These are triggered in this process by what is known as a pull request, where you make known your intention to merge into another branch and allow another team member to review your code before this executes. This is the natural way to introduce code review to your workflow. You do this whenever you want to merge your changes and update them into dev or main branches. The proposed changes can then be made visible to the rest of the team, where they can be debated and iterated on with further commits before completing the merge.

This enforces code review to improve quality, as well as creating an audit trail and safeguards for updates. Figure 2.10 shows an example of how changes to code are made visible for debate during a pull request in GitHub.

Now that we have discussed some of the best practices for applying version control to your code, let’s explore how to version control the models you produce during your ML project.

Model version control

In any ML engineering project, it is not only code changes that you have to track clearly but also changes in your models. You want to track changes not only in the modeling approach but also in performance when new or different data is fed into your chosen algorithms. One of the best tools for tracking these kinds of changes and providing version control of ML models is MLflow, an open-source platform from Databricks under the stewardship of the Linux Foundation.

To install MLflow, run the following command in your chosen Python environment:

pip install mlflow

The main aim of MLflow is to provide a platform via which you can log model experiments, artifacts, and performance metrics. It does this through some very simple APIs provided by the Python mlflow library, interfaced to selected storage solutions through a series of centrally developed and community plugins. It also comes with functionality for querying, analyzing, and importing/exporting data via a Graphical User Interface (GUI), which will look something like Figure 2.11:


Figure 2.11: The MLflow tracking server UI with some forecasting runs.

The library is extremely easy to use. In the following example, we will take the sales forecasting example from Chapter 1, Introduction to ML Engineering, and add some basic MLflow functionality for tracking performance metrics and saving the trained Prophet model:

  1. First, we make the relevant imports, including MLflow’s pyfunc module, which acts as a general interface for saving and loading models that can be written as Python functions. This facilitates working with libraries and tools not natively supported in MLflow (such as the fbprophet library):
    import pandas as pd
    from fbprophet import Prophet
    from fbprophet.diagnostics import cross_validation
    from fbprophet.diagnostics import performance_metrics
    import mlflow
    import mlflow.pyfunc
    
  2. To create a more seamless integration with the forecasting models from fbprophet, we define a small wrapper class that inherits from the mlflow.pyfunc.PythonModel object:
    class FbProphetWrapper(mlflow.pyfunc.PythonModel):
        def __init__(self, model):
            self.model = model
            super().__init__()
        def load_context(self, context):
            from fbprophet import Prophet
            return
        def predict(self, context, model_input):
            future = self.model.make_future_dataframe(
                periods=model_input["periods"][0])
            return self.model.predict(future)
    

    We now wrap the functionality for training and prediction into a single helper function called train_predict() to make running multiple times simpler. We will not define all of the details inside this function here but let’s run through the main pieces of MLflow functionality contained within it.

  3. Inside train_predict(), we first need to let MLflow know that we are starting a training run we wish to track:
    with mlflow.start_run():
        # Experiment code and mlflow logging goes in here
    
  4. Inside this block, we then define and train the model, using parameters defined elsewhere in the code:
    # create Prophet model
    model = Prophet(
        yearly_seasonality=seasonality_params['yearly'],
        weekly_seasonality=seasonality_params['weekly'],
        daily_seasonality=seasonality_params['daily']
    )
    # train and predict
    model.fit(df_train)
    
  5. We then perform some cross-validation to calculate some metrics we would like to log:
    # Evaluate Metrics
    df_cv = cross_validation(model, initial="730 days", 
                             period="180 days", horizon="365 days")
    df_p = performance_metrics(df_cv)
    
  6. We can log these metrics, for example, the Root Mean Squared Error (RMSE) here, to our MLflow server:
    # Log parameter, metrics, and model to MLflow
    mlflow.log_metric("rmse", df_p.loc[0, "rmse"])
    
  7. Then finally, we can use our model wrapper class to log the model and print some information about the run:
    mlflow.pyfunc.log_model("model", python_model=FbProphetWrapper(model))
    print(
        "Logged model with URI: runs:/{run_id}/model".format(
            run_id=mlflow.active_run().info.run_id
        )
    )
    
  6. With only a few extra lines, we have started to perform version control on our models and track the statistics of different runs!
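
Once a run like this has completed, the logged model can be retrieved for inference through the same pyfunc interface. The following is a minimal sketch, assuming the run_id printed above is to hand and that your MLflow tracking configuration points at the same server:

import pandas as pd
import mlflow.pyfunc

# Substitute the run_id printed at the end of the training run
model_uri = "runs:/<run_id>/model"
loaded_model = mlflow.pyfunc.load_model(model_uri)

# The wrapper expects a DataFrame with a "periods" column giving the forecast horizon
prediction_input = pd.DataFrame({"periods": [30]})
forecast = loaded_model.predict(prediction_input)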

There are many different ways to save the ML model you have built to MLflow (and in general), which is particularly important when tracking model versions. Some of the main options are as follows:

  • pickle: pickle is a Python library for object serialization that is often used for the export of ML models that are written in scikit-learn or pipelines in the wider scipy ecosystem (https://docs.python.org/3/library/pickle.html#module-pickle). Although it is extremely easy to use and often very fast, you must be careful when exporting your models to pickle files because of the following:
    • Versioning: When you pickle an object, you have to unpickle it in other programs using the same version of pickle for stability reasons. This adds more complexity to managing your project.
    • Security: The documentation for pickle states clearly that it is not secure and that it is very easy to construct malicious pickles, which will execute dangerous code upon unpickling. This is a very important consideration, especially as you move toward production.

    In general, as long as the lineage of the pickle files you use is known and the source is trusted, they are OK to use and a very simple and fast way to share your models!

  • joblib: joblib is a general-purpose pipelining library in Python that is very powerful but lightweight. It has a lot of really useful capabilities centered around caching, parallelizing, and compression that make it a very versatile tool for saving and reading in your ML pipelines. It is also particularly fast for storing large NumPy arrays, so it is useful for data storage; a minimal save-and-load sketch follows this list. We will use joblib more in later chapters. It is important to note that joblib suffers from the same security issues as pickle, so knowing the lineage of your joblib files is incredibly important.
  • JSON: If pickle and joblib aren’t appropriate, you can serialize your model and its parameters in JSON format. This is good because JSON is a standardized text serialization format that is commonly used across a variety of solutions and platforms. The caveat to using JSON serialization of your models is that you often have to manually define the JSON structure with the relevant parameters you want to store. So, it can create a lot of extra work. Several ML libraries in Python have their own export to JSON functionality, for example, the deep learning package Keras, but they can all result in quite different formats.
  • MLeap: MLeap is a serialization format and execution engine based on the Java Virtual Machine (JVM). It has integrations with Scala, PySpark, and Scikit-Learn but you will often see it used in examples and tutorials for saving Spark pipelines, especially for models built with Spark ML. This focus means it is not the most flexible of formats but is very useful if you are working in the Spark ecosystem.
  • ONNX: The Open Neural Network Exchange (ONNX) format is aimed at being completely cross-platform and allowing the exchange of models between the main ML frameworks and ecosystems. The main downside of ONNX is that (as you can guess from the name) it is mainly aimed at neural network-based models, with the exception of its scikit-learn API. It is an excellent option if you are building a neural network though.
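
To make the joblib option above concrete, here is a minimal save-and-load sketch for a scikit-learn model; the file name is purely illustrative:

import joblib
from sklearn.datasets import load_wine
from sklearn.ensemble import RandomForestClassifier

# Train a simple model so that there is something to serialize
X, y = load_wine(return_X_y=True)
model = RandomForestClassifier(random_state=42).fit(X, y)

# Persist to disk and read it back; the path is an assumption for the example
joblib.dump(model, "rfc.joblib")
reloaded_model = joblib.load("rfc.joblib")
assert (reloaded_model.predict(X) == model.predict(X)).all()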

In Chapter 3, From Model to Model Factory, we will export our models to MLflow using some of these formats, but they are all compatible with MLflow and so you should feel comfortable using them as part of your ML engineering workflow.

The final section of this chapter will introduce some important concepts for planning how you wish to deploy your solution, prefacing more detailed discussions later in the book.

Deploy

The final stage of the ML development process is the one that really matters: how do you get the amazing solution you have built out into the real world and solve your original problem? The answer has multiple parts, some of which will occupy us more thoroughly later in this book but will be outlined in this section. If we are to successfully deploy our solution, first of all, we need to know our deployment options: what infrastructure is available and is appropriate for the task? We then need to get the solution from our development environment onto this production infrastructure so that, subject to appropriate orchestration and controls, it can execute the tasks we need it to and surface the results where it has to. This is where the concepts of DevOps and MLOps come into play.

Let’s elaborate on these two core concepts, laying the groundwork for later chapters and exploring how to begin deploying our work.

Knowing your deployment options

In Chapter 5, Deployment Patterns and Tools, we will cover in detail what you need to get your ML engineering project from the develop to deploy stage, but to pre-empt that and provide a taster of what is to come, let’s explore the different types of deployment options we have at our disposal:

  • On-premises deployment: The first option we have is to ignore the public cloud altogether and deploy our solutions in-house on owned infrastructure. This option is particularly popular and necessary for a lot of large institutions with a lot of legacy software and strong regulatory constraints on data location and processing. The basic steps for deploying on-premises are the same as deploying on the cloud but often require a lot more involvement from other teams with particular specialties. For example, if you are in the cloud, you often do not need to spend a lot of time configuring networking or implementing load balancers, whereas on-premises solutions will require these.

    The big advantage of on-premises deployment is security and peace of mind that none of your data is going to traverse your company firewall. The big downsides are that it requires a larger investment upfront for hardware and that you have to expend a lot of effort to successfully configure and manage that hardware effectively. We will not be discussing on-premises deployment in detail in this book, but all of the concepts we will employ around software development, packaging, environment management, and training and prediction systems still apply.

  • Infrastructure-as-a-Service (IaaS): If you are going to use the cloud, one of the lowest levels of abstraction you have access to for deployment is IaaS solutions. These are typically based on the concept of virtualization, such that servers with a variety of specifications can be spun up at the user’s will. These solutions often abstract away the need for maintenance and operations as part of the service. Most importantly, they allow extreme scalability of your infrastructure as you need it. Have to run 100 more servers next week? No problem, just scale up your IaaS request and there it is. Although IaaS solutions are a big step up from fully managed on-premises infrastructure, there are still several things you need to think about and configure. The balance in cloud computing is always between how easy you want things to be and how much control you want to retain. IaaS maximizes control but minimizes (relative) ease compared to some other solutions. In AWS, Simple Storage Service (S3) and Elastic Compute Cloud (EC2) are good examples of IaaS offerings.
  • Platform-as-a-Service (PaaS): PaaS solutions are the next level up in terms of abstraction and usually provide you with a lot of capabilities without needing to know exactly what is going on under the hood. This means you can focus solely on the development tasks that the platform is geared up to support, without worrying about underlying infrastructure at all. One good example is AWS Lambda functions, which are serverless functions that can scale almost without limit.

All you are required to do is enter the main piece of code you want to execute inside the function. Another good example is Databricks, which provides a very intuitive UI on top of the Spark cluster infrastructure, with the ability to provision, configure, and scale up these clusters almost seamlessly.

Being aware of these different options and their capabilities can help you design your ML solution and ensure that you focus your team’s engineering effort where it is most needed and will be most valuable. If your ML engineer is working on configuring routers, for example, you have definitely gone wrong somewhere.

But once you have selected the components you’ll use and provisioned the infrastructure, how do you integrate these together and manage your deployment and update cycles? This is what we will explore now.

Understanding DevOps and MLOps

A very powerful idea in modern software development is that your team should be able to continuously update your code base as needed, while testing, integrating, building, packaging, and deploying your solution should be as automated as possible. This then means these processes can happen on an almost continual basis without big pre-planned buckets of time being assigned to update cycles. This is the main idea behind CI/CD. CI/CD is a core part of DevOps and its ML-focused cousin MLOps, which both aim to bring together software development and post-deployment operations. Several of the concepts and solutions we will develop in this book will be built up so that they naturally fit within an MLOps framework.

The CI part is mainly focused on the stable incorporation of ongoing changes to the code base while ensuring functionality remains stable. The CD part is all about taking the resultant stable version of the solution and pushing it to the appropriate infrastructure.

Figure 2.12 shows a high-level view of this process:

Figure 2.12: A high-level view of CI/CD processes.

In order to make CI/CD a reality, you need to incorporate tools that help automate tasks that you would traditionally perform manually in your development and deployment process. For example, if you can automate the running of tests upon merging of code, or the pushing of your code artifacts/models to the appropriate environment, then you are well on your way to CI/CD.

We can break this out further and think of the different types of tasks that fall into the DevOps or MLOps lifecycles for a solution. Development tasks will typically cover all of the activities that take you from a blank screen on your computer to a working piece of software. This means that development is where you spend most of your time in a DevOps or MLOps project. This covers everything from writing the code to formatting it correctly and testing it.

Table 2.3 splits out these typical tasks and provides some details on how they build on each other, as well as typical tools you could use in your Python stack for enabling them.

Lifecycle Stage | Activity | Details | Tools
Dev | Testing | Unit tests: tests aimed at testing the functionality of the smallest pieces of code. | pytest or unittest
Dev | Testing | Integration tests: ensure that interfaces within the code and to other solutions work. | Selenium
Dev | Testing | Acceptance tests: business-focused tests. | Behave
Dev | Testing | UI tests: ensuring any frontends behave as expected. |
Dev | Linting | Raise minor stylistic errors and bugs. | flake8 or bandit
Dev | Formatting | Enforce well-formatted code automatically. | black or isort
Dev | Building | The final stage of bringing the solution together. | Docker, twine, or pip

Table 2.3: Details of the development activities carried out in any DevOps or MLOps project.

Next, we can think about the ML activities within MLOps, which this book will be very concerned with. This covers all of the tasks that a classic Python software engineer would not have to worry about, but that are crucially important to get right for ML engineers like us. This includes the development of capabilities to automatically train the ML models, to run the predictions or inferences the model should generate, and to bring that together inside code pipelines. It also covers the staging and management of the versions of your models, which heavily complements the idea of versioning your application code, as we do using tools like Git. Finally, an ML engineer also has to consider that they have to build out specific monitoring capabilities for the operational mode of their solution, which is not covered in traditional DevOps workflows. For an ML solution, you may have to consider monitoring things like precision, recall, the f1-score, population stability, entropy, and data drift in order to know if the model component of your solution is behaving within a tolerable range. This is very different from classic software engineering as it requires a knowledge of how ML models work, how they can go wrong, and a real appreciation of the importance of data quality to all of this. This is why ML engineering is such an exciting place to be! See Table 2.4 for some more details on these types of activities.

Lifecycle Stage | Activity | Details | Tools
ML | Training | Train the model. | Any ML package
ML | Predicting | Run the predictions or inference steps. | Any ML package
ML | Building | Creating the pipelines and application logic in which the model is embedded. | sklearn pipelines, Spark ML pipelines, ZenML
ML | Staging | Tag and release the appropriate version of your models and pipelines. | MLflow or Comet.ml
ML | Monitoring | Track the solution performance and raise alerts when necessary. | Seldon, Neptune.ai, Evidently.ai, or Arthur.ai

Table 2.4: Details on the ML-centered activities carried out during an MLOps project.
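
As a flavor of what the model monitoring activities in Table 2.4 can involve, the following is a minimal sketch of a population stability index (PSI) calculation, one of the drift measures mentioned above. The binning strategy and the rule-of-thumb threshold in the comment are illustrative assumptions rather than a prescribed approach:

import numpy as np

def population_stability_index(expected, actual, bins=10):
    # Bin both samples using cut points derived from the reference (expected) data
    cut_points = np.histogram_bin_edges(expected, bins=bins)
    expected_pct = np.histogram(expected, bins=cut_points)[0] / len(expected)
    actual_pct = np.histogram(actual, bins=cut_points)[0] / len(actual)
    # Floor the proportions to avoid division by zero and log(0)
    expected_pct = np.clip(expected_pct, 1e-6, None)
    actual_pct = np.clip(actual_pct, 1e-6, None)
    return float(np.sum((actual_pct - expected_pct) * np.log(actual_pct / expected_pct)))

# Compare a training-time feature distribution to a live sample of the same feature
reference = np.random.normal(0, 1, size=10_000)
live = np.random.normal(0.2, 1, size=10_000)
print(population_stability_index(reference, live))  # values above ~0.2 are often treated as significant drift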

Finally, in either DevOps or MLOps, there is the Ops piece, which refers to Operations. This is all about how the solution will actually run, how it will alert you if there is an issue, and if it can recover successfully. Naturally then, operations will cover activities relating to the final packaging, build, and release of your solution. It also has to cover another type of monitoring, which is different from the performance monitoring of ML models. This monitoring has more of a focus on infrastructure utilization, stability, and scalability, on solution latency, and on the general running of the wider solution. This part of the DevOps and MLOps lifecycle is quite mature in terms of tooling, so there are many options available. Some information to get you started is presented in Table 2.5.

Lifecycle Stage | Activity | Details | Tools
Ops | Releasing | Taking the software you have built and storing it somewhere central for reuse. | Twine, pip, GitHub, or BitBucket
Ops | Deploying | Pushing the software you have built to the appropriate target location and environment. | Docker, GitHub Actions, Jenkins, TravisCI, or CircleCI
Ops | Monitoring | Tracking the performance and utilization of the underlying infrastructure and general software performance, alerting where necessary. | DataDog, Dynatrace, or Prometheus

Table 2.5: Details of the activities carried out in order to make a solution operational in a DevOps or MLOps project.

Now that we have elucidated the core concepts needed across the MLOps lifecycle, in the next section, we will discuss how to implement CI/CD practices so that we can start making this a reality in our ML engineering projects. We will also extend this to cover automated testing of the performance of your ML models and pipelines, and to perform automated retraining of your ML models.

Building our first CI/CD example with GitHub Actions

We will use GitHub Actions as our CI/CD tool in this book, but there are several other tools available that do the same job. GitHub Actions is available to anyone with a GitHub account, has a very useful set of documentation, https://docs.github.com/en/actions, and is extremely easy to start using, as we will show now.

When using GitHub Actions, you have to create a .yml file that tells GitHub when to perform the required actions and, of course, what actions to perform. This .yml file should be put in a folder called .github/workflows in the root directory of your repository. You will have to create this if it doesn’t already exist. We will do this in a new branch called feature/actions. Create this branch by running:

git checkout -b feature/actions

Then, create a .yml file called github-actions-basic.yml. In the following steps, we will build up this example .yml file for a Python project where we automatically install dependencies, run a linter (a solution to check for bugs, syntax errors, and other issues), and then run some unit tests. This example comes from the GitHub Starter Workflows repository (https://github.com/actions/starter-workflows/blob/main/ci/python-package-conda.yml). Open up github-actions-basic.yml and then execute the following:

  1. First, you define the name of the GitHub Actions workflow and what Git event will trigger it:
    name: Python package
    on: [push]
    
  2. You then list the jobs you want to execute as part of the workflow, as well as their configuration. For example, here we have one job called build, which we want to run on the latest Ubuntu distribution, and we want to attempt the build using several different versions of Python:
    jobs:
      build:
        runs-on: ubuntu-latest
        strategy:
          matrix:
            python-version: ["3.9", "3.10"]
    
  3. You then define the steps that execute as part of the job. Each step is separated by a hyphen and is executed as a separate command. It is important to note that the uses keyword grabs standard GitHub Actions; for example, in the first step, the workflow uses the v3 version of the checkout action, and the second step sets up the Python versions we want to run in the workflow:
    steps:
    - uses: actions/checkout@v3
    - name: Set up Python ${{ matrix.python-version }}
      uses: actions/setup-python@v4
      with:
        python-version: ${{ matrix.python-version }}
    
  4. The next step installs the relevant dependencies for the solution using pip and a requirements.txt file (but you can use conda of course!):
    - name: Install dependencies
      run: |
        python -m pip install --upgrade pip
        pip install flake8 pytest
        if [ -f requirements.txt ]; then pip install -r requirements.txt; fi
    
  5. We then run some linting:
    - name: Lint with flake8
      run: |
        # stop the build if there are Python syntax errors or undefined names
        flake8 . --count --select=E9,F63,F7,F82 --show-source --statistics
        # exit-zero treats all errors as warnings. The GitHub editor is 127 chars wide
        flake8 . --count --exit-zero --max-complexity=10 --max-line-length=127 --statistics
    
  6. Finally, we run our tests using our favorite Python testing library. For this step, we do not want to run through the entire repository, as it is quite complex, so for this example, we use the working-directory keyword to only run pytest in that directory.

    Since it contains a simple test function in test_basic.py, this will automatically pass:

    - name: Test with pytest
      run: pytest
      working-directory: Chapter02
    

We have now built up the GitHub Actions workflow; the next stage is to show it running. This is taken care of automatically by GitHub, all you have to do is push to the remote repository. So, add the edited .yml file, commit it, and then push it:

git add .github/workflows/github-actions-basic.yml
git commit -m "Basic CI run with dummy test"
git push origin feature/actions

After you have run these commands in the terminal, you can navigate to the GitHub UI and then click on Actions in the top menu bar. You will then be presented with a view of all action runs for the repository like that in Figure 2.13.

Figure 2.13: The GitHub Actions run as viewed from the GitHub UI.

If you then click on the run, you will be presented with details of all jobs that ran within the Actions run, as shown in Figure 2.14.

Figure 2.14: GitHub Actions run details from the GitHub UI.

Finally, you can go into each job and see the steps that were executed, as shown in Figure 2.15. Clicking on these will also show the outputs from each of the steps. This is extremely useful for analyzing any failures in the run.

Figure 2.15: The GitHub Actions run steps as shown on the GitHub UI.

What we have shown so far is an example of CI. For this to be extended to cover CD, we need to include steps that push the produced solution to its target host destination. Examples are building a Python package and publishing it to a package index so that it can be installed with pip, or creating a pipeline and pushing it to another system for it to be picked up and run. This latter example will be covered with an Airflow DAG in Chapter 5, Deployment Patterns and Tools. And that, in a nutshell, is how you start building your CI/CD pipelines. As mentioned, later in the book, we will build workflows specific to our ML solutions.
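
For illustration, a CD-style publishing job might look something like the following sketch, which builds a Python package and uploads it to a package index with twine; the release trigger and the PYPI_API_TOKEN secret name are assumptions for the example rather than a prescribed setup:

name: Publish package
on:
  release:
    types: [published]
jobs:
  publish:
    runs-on: ubuntu-latest
    steps:
    - uses: actions/checkout@v3
    - uses: actions/setup-python@v4
      with:
        python-version: "3.10"
    - name: Build and publish
      run: |
        python -m pip install --upgrade pip build twine
        python -m build
        twine upload dist/* --username __token__ --password ${{ secrets.PYPI_API_TOKEN }}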

Now we will look at how we take CI/CD concepts to the next level for ML engineering and build some tests for our model performance, which can then also be triggered as part of continuous processes.

Continuous model performance testing

As ML engineers, we not only care about the core functional behavior of the code we are writing; we also have to care about the models that we are building. This is an easy thing to forget, as traditional software projects do not have to consider this component.

The process I will now walk you through shows how you can take some base reference data and start to build up some different flavors of tests to give confidence that your model will perform as expected when you deploy it.

We have already introduced how to test automatically with pytest and GitHub Actions; the good news is that we can extend this concept to include the testing of some model performance metrics. To do this, you need a few things in place:

  1. Within the action or tests, you need to retrieve the reference data for performing the model validation. This can be done by pulling from a remote data store like an object store or a database, as long as you provide the appropriate credentials; I would suggest storing these as secrets in GitHub. Here, we will use a dataset generated in place using the sklearn library as a simple example.
  2. You need to retrieve the model or models you wish to test from some location as well. This could be a full-fledged model registry or some other storage mechanism. The same points around access and secrets management as in point 1 apply. Here we will pull a model from the Hugging Face Hub (more on Hugging Face in Chapter 3), but this could equally have been an MLflow Tracking instance or some other tool.
  3. You need to define the tests you want to run and that you are confident will achieve the desired outcome. You do not want to write tests that are far too sensitive and trigger failed builds for spurious reasons, and you also want to try and define tests that are useful for capturing the types of failures you would want to flag.

For point 1, here we grab some data from the sklearn library and make it available to the tests through a pytest fixture:

@pytest.fixture
def test_dataset() -> Tuple[np.ndarray, np.ndarray]:
    # Load the dataset
    X, y = load_wine(return_X_y=True)
    # Create a binary target: True for class 2 and False otherwise
    y = y == 2
    # Train and test split
    X_train, X_test, y_train, y_test = train_test_split(X, y,
                                                        random_state=42)
    return X_test, y_test

For point 2, I will use the Hugging Face Hub package to retrieve the stored model. As mentioned in the points above, you will need to adapt this to whatever model storage mechanism you are accessing. The repository in this case is public, so there is no need to store any secrets; if you did need to do this, please use the GitHub Secrets store.

@pytest.fixture
def model() -> sklearn.ensemble._forest.RandomForestClassifier:
    REPO_ID = "electricweegie/mlewp-sklearn-wine"
    FILENAME = "rfc.joblib"
    model = joblib.load(hf_hub_download(REPO_ID, FILENAME))
    return model

Now, we just need to write the tests. Let’s start simple with a test that confirms that the predictions of the model produce the correct object types:

def test_model_inference_types(model, test_dataset):
    assert isinstance(model.predict(test_dataset[0]), np.ndarray)
    assert isinstance(test_dataset[0], np.ndarray)
    assert isinstance(test_dataset[1], np.ndarray)

We can then write a test to assert that some specific conditions on the performance of the model on the test dataset are met:

def test_model_performance(model, test_dataset):
    metrics = classification_report(y_true=test_dataset[1], 
                                    y_pred=model.predict(test_dataset[0]),
                                    output_dict=True)
    assert metrics['False']['f1-score'] > 0.95
    assert metrics['False']['precision'] > 0.9
    assert metrics['True']['f1-score'] > 0.8
    assert metrics['True']['precision'] > 0.8
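
For completeness, these snippets assume a test module with imports along the following lines; this is a minimal sketch and the exact set will depend on how you store your data and models:

from typing import Tuple

import joblib
import numpy as np
import pytest
import sklearn.ensemble
from huggingface_hub import hf_hub_download
from sklearn.datasets import load_wine
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split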

The previous test can be thought of as something like a data-driven unit test and will make sure that if you change something in the model (perhaps you change some feature engineering step in the pipeline or you change a hyperparameter), you will not breach the desired performance criteria. Once these tests have been successfully added to the repo, on the next push, the GitHub action will be triggered and you will see that the model performance test runs successfully.

This means we are performing some continuous model validation as part of our CI/CD process!

Figure 2.16: Successfully executing model validation tests as part of a CI/CD process using GitHub Actions.

More sophisticated tests can be built upon this simple concept, and you can adapt the environment and packages used to suit your needs.

Continuous model training

An important extension of the “continuous” concept in ML engineering is to perform continuous training. The previous section showed how to trigger some ML processes for testing purposes when pushing code; now, we will discuss how to extend this for the case where you want to trigger retraining of the model based on a code change. Later in this book, we will learn a lot about training and retraining ML models based on a variety of different triggers like data or model drift in Chapter 3, From Model to Model Factory, and about how to deploy ML models in general in Chapter 5, Deployment Patterns and Tools. Given this, we will not cover the details of deploying to different targets here but instead show you how to build continuous training steps into your CI/CD pipelines.

This is actually simpler than you probably think. As you have hopefully noticed by now, CI/CD is really all about automating a series of steps, which are triggered upon particular events occurring during the development process. Each of these steps can be very simple or more complex, but fundamentally it is always just other programs we are executing in the specified order upon activating the trigger event.

In this case, since we are concerned with continuous training, we should ask ourselves, when would we want to retrain during code development? Remember that we are ignoring the most obvious cases of retraining on a schedule or upon a drift in model performance or data quality, as these are touched on in later chapters. If we only consider that the code is changing for now, the natural answer is to train only when there is a substantial change to the code.

For example, if a trigger was fired every time we committed our code to version control, this would likely result in a lot of costly compute cycles being used for not much gain, as the ML model will likely not perform very differently in each case. We could instead limit the triggering of retraining to only occur when a pull request is merged into the main branch. In a project, this is an event that signifies a new software feature or functionality has been added and has now been incorporated into the core of the solution.

As a reminder, when building CI/CD in GitHub Actions, you create or edit YAML files contained in the .github/workflows folder of your Git repository. If we want to trigger a training process on pull request events, then we can add something like:

name: Continuous Training Example
on: [pull_request]
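
Note that on: [pull_request] fires on pull request events such as the request being opened or updated. If you want the training job to run only once changes have actually landed on the main branch, as discussed above, one option (a small sketch, assuming your default branch is called main) is to trigger on pushes to that branch instead:

on:
  push:
    branches: [main]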

And then we need to define the steps for pushing the appropriate training script to the target system and running it. First, this would likely require some fetching of access tokens. Let’s assume this is for AWS and that you have loaded your appropriate AWS credentials as GitHub Secrets; for more information, see Chapter 5, Deployment Patterns and Tools. We would then be able to retrieve these in the first step of a deploy-trainer job:

jobs:
  deploy-trainer:
    runs-on: [ubuntu-latest]
    steps:
    - name: Checkout
      uses: actions/checkout@v3
    - name: Configure AWS Credentials
      uses: aws-actions/configure-aws-credentials@v2
      with:
        aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
        aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
        aws-region: us-east-2
        role-to-assume: ${{ secrets.AWS_ROLE_TO_ASSUME }}
        role-external-id: ${{ secrets.AWS_ROLE_EXTERNAL_ID }}
        role-duration-seconds: 1200
        role-session-name: TrainingSession

You may then want to copy your repository files to a target S3 destination; perhaps they contain modules that the main training script needs to run. You could then do something like this:

    - name: Copy files to target destination
      run: aws s3 sync . s3://<S3-BUCKET-NAME>

And finally, you would want to run some sort of process that uses these files to perform the training. There are so many ways to do this that I have left the specifics out for this example. Many ways for deploying ML processes will be covered in Chapter 5, Deployment Patterns and Tools:

    - name: Run training job
      run: |
        # Your bespoke run commands go in here using the tools of your choice!

And with that, you have all the key pieces you need to run continuous ML model training to complement the other section on continuous model performance testing. This is how you bring the DevOps concept of CI/CD to the world of MLOps!

Summary

This chapter was all about building a solid foundation for future work. We discussed the development steps common to all ML engineering projects, which we called “Discover, Play, Develop, Deploy,” and contrasted this way of thinking against traditional methodologies like CRISP-DM. In particular, we outlined the aim of each of these steps and their desired outputs.

This was followed by a high-level discussion of tooling and a walkthrough of the main setup steps. We set up the tools for developing our code, keeping track of the changes to that code, managing our ML engineering project, and finally, deploying our solutions.

In the rest of the chapter, we went through the details for each of the four steps we outlined previously, with a particular focus on the Develop and Deploy stages. Our discussion covered everything from the pros and cons of Waterfall and Agile development methodologies to environment management and then software development best practices. We explored how to package your ML solution and what deployment infrastructure is available for you to use, and outlined the basics of setting up your DevOps and MLOps workflows. We finished up the chapter by discussing, in some detail, how to apply testing to our ML code, including how to automate this testing as part of CI/CD pipelines. This was then extended into the concepts of continuous model performance testing and continuous model training.

In the next chapter, we will turn our attention to how to build out the software for performing the automated training and retraining of your models using a lot of the techniques we have discussed here.

Join our community on Discord

Join our community’s Discord space for discussion with the author and other readers:

https://packt.link/mle
