
Applied Machine Learning for Healthcare and Life Sciences using AWS

Introducing Machine Learning and the AWS Machine Learning Stack

Applying Machine Learning (ML) technology to solve tangible business problems has become increasingly popular among business and technology leaders. Many cutting-edge use cases have applied ML in meaningful ways and shown considerable success. For example, computer vision models can allow you to search for what’s in an image by automatically inferring its content, and Natural Language Processing (NLP) models can understand the intent of a conversation and respond automatically while closely mimicking human interactions. As a matter of fact, you may not even know whether the “entity” on the other side of a phone call is an AI bot or a real person!

While AI technology has a lot of potential, there is still a limited understanding of it, usually concentrated in the hands of a few researchers and advanced practitioners who have spent decades in the field. To bridge this knowledge gap, a large section of software and information technology firms, such as Amazon Web Services (AWS), are committed to creating tools and services that do not require a deep understanding of the underlying ML technology and are still able to achieve positive results. While these tools democratize AI, conceptual knowledge of AI and ML remains critical for successful application and should not be ignored.

In this chapter, we will build an understanding of ML and how it differs from traditional software. We will get an overview of a typical ML life cycle and also learn about the steps a data scientist needs to perform to deploy an ML model in production. These concepts are fairly generic and can be applied to any domain or organization where ML is utilized.

By the end of this chapter, you will have a good understanding of how AWS helps democratize ML with purpose-built services that are applicable to developers of all skill levels. We will walk through the AWS ML stack and the different categories of services it contains, which will help you understand how the AWS AI/ML services are organized overall. We’ll cover these topics in the following sections:

  • What is ML?
  • Exploring the ML life cycle
  • Introducing ML on AWS

What is ML?

As the name suggests, ML generally refers to the area of computer science that involves making machines learn and make decisions on their own rather than acting on a set of explicit instructions. Here, think of the machine as the processor of a computer and the instructions as a program written in a particular programming language. The compiler or the interpreter parses the program and derives a set of instructions that the processor can then execute. The programmer is responsible for making sure the logic they have in their program is correct, as the processor will just perform the task as instructed.

For example, let’s assume you want to create a marketing campaign for a new product and want to target the right population to send the campaign email to. To identify this population, you can write a SQL query that filters out the right population using a set of rules. We can create rules around age, purchase history, gender, and so on, and the query will simply process the inputs based on these rules. This is depicted in the following diagram.

Figure 1.1 – A diagram showing the input, logic, and output of a traditional software program
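To make the rule-based approach concrete, here is a minimal sketch of such explicit filtering logic using pandas; the dataset and column names are hypothetical and only illustrate that the programmer, not the machine, encodes the decision logic.

import pandas as pd

# Hypothetical customer data; the column names are illustrative only
customers = pd.DataFrame({
    "age": [25, 42, 67, 31],
    "gender": ["F", "M", "F", "M"],
    "purchases_last_year": [12, 3, 8, 0],
})

# Explicit, hand-written rules: the programmer fixes the logic up front
target_population = customers[
    (customers["age"].between(25, 45)) &
    (customers["purchases_last_year"] >= 5)
]
print(target_population)

The equivalent SQL query would express the same fixed conditions in a WHERE clause; either way, the logic never changes unless a human changes it.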

In the case of ML, we allow the processor to “learn” from past data points about what is correct and incorrect. This process is called training. It then tries to apply that learning to unseen data points to make a decision. This process is known as prediction because it usually involves determining events that haven’t yet happened. We can represent the previous problem as an ML problem in the following way.

Figure 1.2 – A diagram showing how historical data is used to generate predictions using an ML model

As shown in the preceding diagram, we feed the learning algorithm with some known data points in the form of training data. The algorithm then comes up with a model that is now able to predict outputs from unseen data.

While ML models can be highly accurate, it is worth noting that the output of a model is a probabilistic estimate of an answer rather than the deterministic result produced by a traditional software program. This means that ML models help us predict the probability of something happening, rather than telling us what will happen for sure. For this reason, it is important to continuously evaluate an ML model’s output and determine whether the model needs to be retrained.

Also, downstream consumers of an ML model (client applications) need to keep the probabilistic nature of the output in mind before making decisions based on it. For example, software to compute the sales numbers at the end of each quarter will provide a deterministic figure based on which you can calculate your profit for the quarter. However, an ML model will predict the sales number at the end of a future quarter, based on which you can predict what your profit would look like. The former can be entered in the books or ledger but the latter can be used to get an idea of the future result and take corrective actions if needed.
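The following minimal sketch contrasts the two kinds of output; the numbers and features are invented, and the toy model only illustrates that a prediction comes back as a probability rather than a fixed figure.

import numpy as np
from sklearn.linear_model import LogisticRegression

# Deterministic computation: the quarterly sales total is fixed arithmetic
quarterly_sales = [120.0, 95.5, 143.2]
total = sum(quarterly_sales)  # always the same answer for the same input

# Probabilistic ML output: a toy model trained on past (feature, outcome) pairs
X_train = np.array([[1.0], [2.0], [3.0], [4.0]])  # e.g., marketing spend
y_train = np.array([0, 0, 1, 1])                  # e.g., sales target hit or not
model = LogisticRegression().fit(X_train, y_train)

# predict_proba returns probabilities, not certainties
print(total)
print(model.predict_proba(np.array([[2.5]])))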

Now that we have a basic understanding of how to define ML as a concept, let’s look at two broad types of ML.

Supervised versus unsupervised learning

At a high level, ML models can be divided into two categories:

  • Supervised learning model: A supervised ML model is created when the training data has a target variable in place. In other words, the training data contains unique combinations of input features and target variables. This is known as a labeled dataset. The supervised learning model learns the relationship between the target and the input features during the training process. Hence, it is important to have high-quality labeled datasets when training supervised learning models. Examples of supervised learning models include classification and regression models. Figure 1.3 depicts how this would work for a model that recognizes the breed of dogs.
Figure 1.3 – A diagram showing a typical supervised learning prediction workflow

  • Unsupervised learning model: An unsupervised learning model does not depend on the availability of labeled datasets. In other words, unlike its supervised cousin, the unsupervised learning model does not learn the association between the target variable and the input features. Instead, it learns the patterns in the overall data to determine how similar or different each data point is from the others. This is usually done by representing all the data points in a parameter space of two or three dimensions and calculating the distance between them; the closer two points are, the more similar they are. A common example of an unsupervised learning model is k-means clustering, which divides the input data points into k groups, or clusters. A short sketch contrasting the two types of learning follows this list.
Figure 1.4 – A diagram showing a typical unsupervised learning prediction workflow
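Here is a minimal sketch of both types using scikit-learn; the four data points are invented, and the choice of a decision tree and k-means is only for illustration. The supervised model is given labels to learn from, while k-means must group the same points on its own.

import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.cluster import KMeans

X = np.array([[1, 1], [1, 2], [8, 8], [9, 8]])

# Supervised: the target variable (labels) accompanies the input features
y = np.array(["small", "small", "large", "large"])
clf = DecisionTreeClassifier().fit(X, y)
print(clf.predict([[9, 9]]))  # -> ['large']

# Unsupervised: no labels; k-means groups points by distance in feature space
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(km.labels_)  # cluster assignment for each point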

Now that we have an understanding of the two broad types of ML models, let us review a few key terms and concepts that are commonly used in ML.

ML terminology

There are some key concepts and terminologies specific to ML and it’s important to get a good understanding of these concepts before you go deeper into the subject. These terms will be used repeatedly throughout this book:

  • Algorithm: Algorithms are at the core of an ML workflow. The algorithm defines how the training data is utilized for learning and how that learning is then used to make predictions of a target variable. An example of an algorithm is linear regression. This algorithm is used to find the best fit line that minimizes the error between the actual and the predicted values of the target. This best fit line can be represented by a linear equation such as y=ax+b. This type of algorithm can be used for problems that can be represented by linear relationships, for example, predicting the height of a person based on their age or predicting the cost of a house based on its square footage.

However, not all problems can be solved using a linear equation because the relationship between the target and the input data points might be non-linear, represented by a curve instead of a straight line. In the case of non-linear regression, the curve is represented by a non-linear equation y=f(x,c)+b, where f(x,c) can be any non-linear function. This type of algorithm can be used for problems that can be represented by non-linear relationships; for example, the progression of a disease in a population can be driven by multiple non-linear relationships. An example of a non-linear algorithm is a decision tree. This algorithm aims to learn how to split the data into smaller subsets until each subset is as close in representation to the target variable as possible.

The choice of algorithm to solve a particular problem ultimately depends on multiple factors. It is often recommended to try multiple algorithms and find the one that works best for a particular problem. However, having an intuition of how the algorithm works allows you to narrow it down to a few.
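As a quick illustration of this intuition, the sketch below fits a linear model and a decision tree to the same made-up house-price data; the figures are invented and only show how each algorithm represents the relationship.

import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor

# Hypothetical data: house price driven by square footage
X = np.array([[800], [1200], [1500], [2000]])
y = np.array([160000, 240000, 300000, 400000])

# Linear algorithm: learns the coefficients a and b of y = ax + b
linear = LinearRegression().fit(X, y)
print(linear.coef_, linear.intercept_)  # roughly a=200, b=0 for this data

# Non-linear algorithm: a decision tree splits the data into subsets instead
tree = DecisionTreeRegressor(max_depth=2).fit(X, y)
print(tree.predict([[1700]]))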

  • Training: Training is the process by which an algorithm learns. In other words, it is the process of converging on the best fit line or curve based on the input dataset and the target variable. As a result, it is also sometimes referred to as fitting. During the training process, the input data and the target are fed into the algorithm iteratively, in batches. The process tries to determine the coefficients of the final equation that represents the line or the curve with the minimum error when compared to the target variable. During the training process, the input dataset is divided into three different groups: train, validation, and test. The train dataset is the majority of the input data and is used to fit or train the model. The validation dataset is used to evaluate the model’s performance and, if necessary, tune the model’s input parameters (also known as hyperparameters) in an iterative way. Finally, the test dataset is the dataset on which the final evaluation of the model is done, which determines whether it can be deployed in production. The process of training and tuning the model is highly iterative. It requires multiple runs and trial and error to determine the best combination of parameters to use in the final model.

The evaluation of the ML model is done by a metric, also known as the evaluation metric, which determines how good the model is. Depending on the choice of the evaluation metric, the training process aims to minimize or maximize the value of the metric. For instance, if the evaluation metric is Mean Squared Error, the goal of the training job is to minimize it. However, if the evaluation metric is accuracy, the goal would be to maximize it. Training is a compute-intensive process and can consume considerable resources.

Figure 1.5 – A diagram showing the steps of a model training workflow
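The split-and-evaluate loop described above can be sketched as follows; the synthetic data and the choice of linear regression with Mean Squared Error are assumptions made purely for illustration.

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

# Synthetic data standing in for a real training set
rng = np.random.default_rng(0)
X = rng.random((100, 3))
y = X @ np.array([2.0, -1.0, 0.5]) + 0.1 * rng.standard_normal(100)

# Carve out the test set first, then split the rest into train/validation
X_rest, X_test, y_rest, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X_rest, y_rest, test_size=0.25, random_state=0)

model = LinearRegression().fit(X_train, y_train)

# The validation error guides iteration; the test error is the final check
print("validation MSE:", mean_squared_error(y_val, model.predict(X_val)))
print("test MSE:", mean_squared_error(y_test, model.predict(X_test)))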

  • Model: An ML model is an artifact that results from the training process. In essence, when you train an algorithm with data, it results in a model. A model accepts input parameters and provides the predicted value of the target variable. The input parameters should be exactly the same in structure and format as the training data input parameters. The model can be serialized into a format that can be stored as a file and then deployed into a workflow to generate predictions. The serialized model file stores the weights or coefficients that, when applied to the equation, result in the value of the predicted target. To generate predictions, the model needs to be de-serialized, or reconstructed, from the saved model file. Saving the model to disk by serializing it allows for model portability, a term data scientists typically use to denote interchangeability between frameworks. Common ML frameworks such as PyTorch, scikit-learn, and TensorFlow all support serializing model files into standard formats, which allows you to standardize your model registries and also use them interchangeably if needed. A serialization and inference sketch follows at the end of this list.
  • Inference: Inference is the process of generating predictions from the trained model. Hence, this step is also known as predicting. The model that has been trained on past data is now exposed to unseen data to generate the value of the target variable. As described earlier, the model resulting from the training process is already evaluated using a metric on a test dataset, which is a subset of the input data. However, this does not guarantee that the model will perform well on unseen data when deployed. As a result, prediction results are continuously monitored and compared against the ground truth (actual) values.

There are various ways in which the model results are monitored and evaluated in production. One common method utilizes humans to evaluate certain prediction results that are suspicious. This method of validation is also known as human-in-the-loop. In this method, the model results with lower confidence (usually denoted by a confidence score) are routed to a human to determine if the output is correct or not. The human can then override the model result if needed before sending it to the downstream system. This method, while extremely useful, has a drawback. Some ML models do not have the ground truth data available until the event actually happens in the future. For instance, if the model predicts a patient is going to require a kidney transplant, a human may not be able to validate that output of the model until the transplant actually happens (or not). In such cases, the human-in-the-loop method of validation does not work. To account for such limitations, the method of monitoring drift in real-world data compared to the training data is utilized to determine the effectiveness of the model predictions. If the real-world data is very different from the training data, the chances of predictions being correct are minimal and hence, it may require retraining the model.

Inferences from an ML model can be executed as an asynchronous batch process or as a synchronous real-time process. An asynchronous process is great for workloads that run in batches. For example, consider calculating the risk score of loan defaults across all monthly loan applications at the end of the month. This risk score is generated or updated once a month for a large batch of individuals who applied for a loan. As a result, the model does not need to serve requests 24/7 and is only used at scheduled times. Synchronous, or real-time, inference is when the model serves an inference as a response to each request 24/7 in real time. In this case, the model needs to be hosted on a highly available infrastructure that remains up and running at all times and also adheres to the latency requirements of the downstream application. For example, a weather forecast application that continuously updates the forecast conditions based on the predictions from a model needs to generate predictions 24/7 in real time.
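Here is a minimal sketch of the serialize, deserialize, and predict cycle described above, using joblib (which ships alongside scikit-learn); the model, file name, and data are all hypothetical.

import joblib
import numpy as np
from sklearn.linear_model import LogisticRegression

X = np.array([[0.2], [0.4], [0.6], [0.8]])
y = np.array([0, 0, 1, 1])
model = LogisticRegression().fit(X, y)

# Serialize: the saved file stores the learned weights/coefficients
joblib.dump(model, "model.joblib")

# Later, for example in a separate inference process, reconstruct the model
restored = joblib.load("model.joblib")

# Batch (asynchronous) inference: score many records in one pass
batch = np.array([[0.1], [0.5], [0.9]])
print(restored.predict(batch))

# Real-time (synchronous) inference: score a single request as it arrives
print(restored.predict_proba(np.array([[0.55]])))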

Now that we have a good understanding of what ML is and the key terminologies associated with it, let’s look at the process of building the model in more detail.

Exploring the ML life cycle

The ML life cycle refers to the various stages in the conceptualization, design, development, and deployment of an ML model. These stages in the ML model development process consist of a few key steps that help data scientists come up with the best possible outcome for the problem at hand. These steps are usually repeatable and iterative and are combined into a pipeline commonly known as the ML pipeline. An ideal ML pipeline is automated and repeatable so it can be deployed and maintained as a production pipeline. Here are the common stages of an ML life cycle.

Figure 1.6 – A diagram showing the steps of an ML life cycle

Figure 1.6 shows the various steps of the ML life cycle. It starts with having a business understanding of the problem and ends with a deployed model. The iterative steps such as data preparation and model training are denoted by loops to depict that the data scientists would perform those steps repeatedly until they are satisfied with the results. Let us now look at the steps in more detail.

Problem definition

A common mistake is to think ML can solve any problem! Problem definition is key to determining whether ML can be utilized to solve a given problem. In this step, data scientists work with business stakeholders to find out whether the problem satisfies the key tenets of a good ML problem:

  • Predictive element: During the ML problem definition, data scientists try to determine whether the problem has a predictive element. It may well be the case that the output being requested can be modeled as a rule that is calculated using existing data instead of creating a model to predict it.

For example, let us take into consideration the problem of health insurance claim fraud identification. There are some tell-tale signs of a claim being fraudulent that are derivable from the existing claims database using data transformations and analytical metrics. For example, verifying whether it’s a duplicate claim, whether the claim amount is unusually high, whether the reason for the claim matches the patient demographic or history, and so on. These attributes can help determine the high-risk claim transactions, which can then be flagged. For this particular problem, there is no need for an ML model to flag such claim transactions as the rules applied to existing claim transaction data are enough to achieve what is needed. On the other hand, if the solution requires a deeper analysis of multiple sources of data and looks at patterns across a large volume of such transactions, it may not be a good candidate for rules or analytical metrics. Applying conventional analytics to large volumes of heterogeneous datasets can result in extremely complicated analytical queries that are hard to debug and maintain. Moreover, the processing of rules on these large volumes of data can be compute-intensive and may become a bottleneck for the timely identification of fraudulent claims. In such cases, applying ML can be beneficial. A model can look at features from different sources of data and learn how they are associated with the target variable (fraud versus no fraud). It can then be used to generate a risk score for each new claim.

It is important to talk to key business stakeholders to understand the different factors that go into determining whether a claim is fraudulent or not. In the process, data scientists document a list of input features that can be used in the ML model. These factors help in the overall determination of the predictive element of the problem statement.

  • Availability of dataset: Once the problem is determined to be a good candidate for ML, the next important thing to check is the availability of a high-quality labeled dataset. We cannot train models without the available data. The dataset should also be clean, with no missing values, and be evenly distributed across all features and values. It should have a mapping to the target variable, and the target itself should be evenly distributed across the dataset. This is obviously the ideal condition, and real-world scenarios may be far from ideal. However, the closer we can get to this ideal state of data, the easier it is to produce a highly accurate model from it. In some cases, data scientists may recommend that the business collect more data containing more examples of a certain type, or even more features, before starting to experiment with ML methods. In other cases, labeling and annotation of the raw data by subject matter experts (SMEs) may be needed. This is a time-consuming step and may require multiple rounds of discussions with the SMEs, business stakeholders, and data scientists before arriving at an appropriate dataset to begin the ML modeling process. It is worth the time, as utilizing the right dataset helps ensure the success of the ML project.
  • Appetite for experimentation: It is important to highlight that data science is a process of experimentation, and the chances of it being successful are not always high. In a software development exercise, the work involved in each phase of requirements gathering, development, testing, and deployment can be largely predictable and can be used to accurately estimate the time it will take to complete the project. In an ML project, that may be difficult to determine from the outset. Steps such as data gathering and the training and tuning of hyperparameters are highly iterative, and it could take a long time to come up with the best model. In some cases where the problem and dataset are well known, it may be easier to estimate the time, as the results have been proven. However, the time taken to solve novel problems using ML methods can be difficult to determine. It is therefore recommended that the key stakeholders are aware of this and make sure the business has an appetite for experimentation.

Data processing and feature engineering

Before data can be fed into an algorithm for training a model, it needs to be transformed, cleaned, and formatted in a way that can be understood by ML algorithms. For example, raw data may have missing values and may not be standardized across all columns. It may also need transformations to create new derived columns or to drop a few columns that are not needed for ML. Once these data processing steps are complete, the data needs to be made suitable for ML algorithms for training. As you know by now, an algorithm represents a mathematical equation that accepts the input values of the training dataset and tries to learn their association with the target. Therefore, it cannot accept non-numeric values. In a typical training dataset, you may have numeric, categorical, or text values that have to be appropriately engineered to make them suitable for training. Some of the common techniques of feature engineering are as follows:

  • Scaling: This is a technique by which a feature that may vary a lot across the dataset can be represented at a common scale. This allows the final model to be less sensitive to the variations in the feature.
  • Standardizing: This technique allows the feature distribution to have a mean value of zero and a standard deviation of one.
  • Binning: This approach allows for granular numerical values to be grouped into a set, resulting in categorical variables. For example, people above 60 years of age are old, between 18 and 60 are adults, and below 18 are young.
  • Label encoding: This technique is used to convert categorical features into numeric features by associating a numerical value to each unique value of the categorical variable. For example, if a feature named color consists of three unique values – Blue, Black, and Red – label encoders can associate a unique number with each of those colors, such as Blue=1, Black=2, and Red=3.
  • One-hot encoding: This is another technique for encoding categorical variables. Instead of assigning a unique number to each value of a categorical feature, this technique converts each feature into a column in the dataset and assigns it a 1 or 0. Here is an example:

    Price    Model
    1000     iPhone
    800      Samsung
    900      Sony
    700      Motorola

Table 1.1 – A table showing data about cell phone models and their price

Applying one-hot encoding to the preceding table will result in the following structure.

Price   iPhone   Samsung   Sony   Motorola
1000    1        0         0      0
800     0        1         0      0
900     0        0         1      0
700     0        0         0      1

Table 1.2 – A table showing the results of one-hot encoding applied to Table 1.1

The resulting table is sparse in nature and consists of numeric features that can be fed into an ML algorithm for training. A short sketch of these feature engineering techniques in code follows.
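The sketch below applies several of the techniques above to the cell phone data from Table 1.1 using pandas and scikit-learn; the bin boundaries and helper column names are assumptions made for illustration.

import pandas as pd
from sklearn.preprocessing import StandardScaler, LabelEncoder

df = pd.DataFrame({
    "price": [1000, 800, 900, 700],
    "model": ["iPhone", "Samsung", "Sony", "Motorola"],
})

# Standardizing: zero mean and unit standard deviation
df["price_std"] = StandardScaler().fit_transform(df[["price"]]).ravel()

# Binning: group granular numeric values into categories
df["price_band"] = pd.cut(df["price"], bins=[0, 750, 950, 2000],
                          labels=["low", "mid", "high"])

# Label encoding: one integer per unique category value
df["model_code"] = LabelEncoder().fit_transform(df["model"])

# One-hot encoding: one 0/1 column per category, as in Table 1.2
one_hot = pd.get_dummies(df[["price", "model"]], columns=["model"])
print(one_hot)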

The data processing and feature engineering steps you ultimately apply depend on your source data. We will look at some of these techniques applied to datasets in subsequent chapters where we will see examples of building, training, and deploying ML models with different datasets.

Model training and deployment

Once the features have been engineered and are ready, it is time to enter the training and deployment phase. As mentioned earlier, it’s a highly repetitive phase of the ML life cycle in which the training data is fed into the algorithm to come up with the best fit model. This process involves analyzing the output of the training metrics and tweaking the input features and/or the hyperparameters to achieve a better model. Tuning the hyperparameters of a model is driven by intuition and experience. Experienced data scientists select the initial parameters based on their knowledge of solving similar problems using the algorithm of choice and can come up with the best fit model faster. However, the trial-and-error process can be time-consuming for a new data scientist who is starting off with a random search of the parameters. This process of identifying the best hyperparameters of a model is known as hyperparameter tuning.
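A minimal sketch of hyperparameter tuning using a scikit-learn grid search follows; the parameter grid and the synthetic dataset are illustrative assumptions, and managed services covered later in the book can automate this kind of search at scale.

from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.tree import DecisionTreeClassifier

# Synthetic dataset standing in for real training data
X, y = make_classification(n_samples=200, random_state=0)

# Each hyperparameter combination is trained and scored via cross-validation
param_grid = {"max_depth": [2, 4, 8], "min_samples_leaf": [1, 5, 10]}
search = GridSearchCV(DecisionTreeClassifier(random_state=0), param_grid, cv=3)
search.fit(X, y)

print(search.best_params_)  # the best-scoring combination
print(search.best_score_)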

The trained model is then typically deployed as a REST API that can be invoked to generate predictions. It’s important to note that training and deployment form a continuous process in an ML life cycle. As discussed earlier, models that perform well in the training phase may degrade in performance in production over a period of time and may require retraining. It is also important to keep training the model at regular intervals with newly available real-world data to make sure it is able to predict accurately across all variations of production data. For this reason, ML engineers prefer to create a repeatable ML pipeline that continuously trains, tunes, and deploys newer versions of models as needed. This process is known as ML Operations, or simply MLOps, and the pipeline that performs these tasks is known as an MLOps pipeline.
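To make the REST deployment idea concrete, here is a minimal, self-managed sketch using Flask; the endpoint path and model file name are hypothetical, and a managed service such as Amazon SageMaker would provide the hosting, scaling, and monitoring that this toy server omits.

import joblib
import numpy as np
from flask import Flask, jsonify, request

app = Flask(__name__)
model = joblib.load("model.joblib")  # the artifact produced by training

@app.route("/predict", methods=["POST"])
def predict():
    # Expects a JSON body such as {"features": [0.2, 1.5, 3.1]}
    features = np.array(request.json["features"]).reshape(1, -1)
    prediction = model.predict(features)
    return jsonify({"prediction": prediction.tolist()})

if __name__ == "__main__":
    app.run(port=8080)

A client would then POST feature values to /predict and receive the prediction as JSON in response.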

Introducing ML on AWS

AWS puts ML in the hands of every developer, irrespective of their skill level and expertise, so that businesses can adopt the technology quickly and effectively. AWS focuses on removing the undifferentiated heavy lifting in the process of building ML models such as the management of the underlying infrastructure, the scaling of the training and inference jobs, and ensuring high availability of the models. It provides developers with a variety of compute instances and containerized environments to choose from that are purpose-built for the accelerated and distributed computing needed for high-scale ML jobs. AWS has a broad and deep set of ML capabilities for builders that can be connected together, like Lego pieces, to create intelligent applications.

AWS ML services cover the full life cycle of an ML pipeline, from data annotation/labeling and data cleansing through feature engineering, model training, deployment, and monitoring. There are purpose-built services for problems in computer vision, natural language processing, forecasting, recommendation engines, and fraud detection, to name a few. There are also options for automatic model creation and no-/low-code options for creating ML models. The AWS ML services are organized into three layers, also known as the AWS machine learning stack.

Introducing the AWS ML stack

The following diagram represents the version of the AWS AI/ML services stack as of April 2022.

Figure 1.7 – A diagram depicting the AWS ML stack as of April 2022

The stack can be used by expert practitioners who want to develop a project within the framework of their choice; data scientists who want to use the end-to-end capabilities of SageMaker; business analysts who can build their own model using Canvas; or application developers with no previous ML skills who can add intelligence to their applications with the help of API calls. The following are the three layers of the AWS AI/ML stack:

  • AI services layer: The AI services layer of the AWS ML stack is the topmost layer of the stack. It consists of services that require minimal knowledge of ML. Some services come with a pre-trained model that can simply be invoked using APIs from the AWS SDK, the AWS CLI, or the console. In other cases, the services allow you to customize the model by providing your own labeled training dataset so the responses are more appropriate for the problem at hand. In either case, the AI services layer of the AWS AI/ML stack is focused on ease of use. The services are designed for specialized applications in industrial settings, search, business processes, and healthcare, and come with a core set of capabilities in the areas of speech, chatbots, vision, and text and documents. A minimal invocation sketch follows this list.
  • ML services layer: The ML services layer is the middle layer of the AWS AI/ML stack. It provides tools for data scientists to perform all the steps of the ML life cycle, such as data cleansing, feature engineering, model training, deployment, and monitoring. It is driven by the core ML platform of AWS known as Amazon SageMaker. SageMaker provides the ability to build a modular containerized environment that interfaces with the AWS compute and storage services seamlessly. It provides its own SDK that has APIs to interact with the service. It removes the complexity from each step of the ML workflow by providing simple-to-use modular capabilities with a choice of deployment architectures and patterns to suit virtually any ML application. It also contains MLOps capabilities to create a reproducible ML pipeline that is easy to maintain and scale. The ML services layer is suited for data scientists who build and train their own models and maintain large-scale models in production environments.
  • ML frameworks and infrastructure layer: The ML frameworks and infrastructure layer is the bottom layer of the AWS AI/ML stack. The services in this layer are for expert practitioners who can develop using the framework of their choice. It gives developers and scientists the choice of running their workloads as a managed experience in Amazon SageMaker or in a self-managed environment on AWS Deep Learning Amazon Machine Images (AMIs) and AWS Deep Learning Containers. The AWS Deep Learning AMIs and containers are fully configured with the latest versions of the most popular deep learning frameworks and tools, including PyTorch, MXNet, and TensorFlow. As part of this layer, AWS provides a broad and deep portfolio of compute, networking, and storage infrastructure services with a choice of processors and accelerators to meet your unique performance and budget needs for ML.
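As an example of the AI services layer's ease of use, the sketch below calls Amazon Comprehend Medical through the boto3 SDK to extract medical entities from a sentence; the clinical note is invented, and the snippet assumes AWS credentials and a region are already configured in your environment.

import boto3

# Hypothetical clinical note
text = "Patient was prescribed 40 mg of atorvastatin daily for hyperlipidemia."

client = boto3.client("comprehendmedical")
response = client.detect_entities_v2(Text=text)

# Each detected entity carries its text, a category, and a confidence score
for entity in response["Entities"]:
    print(entity["Text"], entity["Category"], entity["Score"])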

Now that we have a good understanding of ML and the AWS ML stack, it is a good time to re-read any sections that may not be entirely clear. This chapter only introduces the concepts of ML, so if you want to dive deeper into any of the topics touched upon here, there are several trusted online resources for you to refer to. Let us now summarize the lessons from this chapter and see what’s ahead.

Summary

In this chapter, you got an overview of the basic concepts of ML. You went over the definition of ML and how it differs from typical software. You also learned about important terminologies and concepts that are heavily used in the context of ML. The chapter also covered the important steps of the ML life cycle, which can be combined to create an end-to-end ML pipeline for deploying models in production. Lastly, you got an introduction to the AWS ML stack and how the AWS AI/ML services are organized.

In Chapter 2, Exploring Key AWS Machine Learning Services for Healthcare and Life Sciences, we will dive into the details of some of the critical AWS services that allow healthcare and life sciences customers to build, train, and deploy ML models for solving important problems in the industry. We will cover those problems in detail in the subsequent chapters of this book.

Key benefits

  • Learn about healthcare industry challenges and how machine learning can solve them
  • Explore AWS machine learning services and their applications in healthcare and life sciences
  • Discover practical coding instructions to implement machine learning for healthcare and life sciences

Description

While machine learning is not new, it's only now that we are beginning to uncover its true potential in the healthcare and life sciences industry. The availability of real-world datasets and access to better compute resources have helped researchers invent applications that utilize known AI techniques in every segment of this industry, such as providers, payers, drug discovery, and genomics. This book starts by summarizing the introductory concepts of machine learning and AWS machine learning services. You’ll then go through chapters dedicated to each segment of the healthcare and life sciences industry. Each of these chapters has three key purposes. First, to introduce each segment of the industry, its challenges, and the applications of machine learning relevant to that segment. Second, to help you get to grips with the features of the services available in the AWS machine learning stack, like Amazon SageMaker and Amazon Comprehend Medical. Third, to enable you to apply your new skills to create an ML-driven solution to solve problems particular to that segment. The concluding chapters outline future industry trends and applications. By the end of this book, you’ll be aware of the key challenges faced in applying AI to the healthcare and life sciences industry and know how to address those challenges with confidence.

Who is this book for?

This book is specifically tailored toward technology decision-makers, data scientists, machine learning engineers, and anyone who works in a data engineering role in healthcare and life sciences organizations. Whether you want to apply machine learning to overcome common challenges in the healthcare and life sciences industry or are looking to understand the broader industry AI trends and landscape, this book is for you. This book is filled with hands-on examples for you to try as you learn about new AWS AI concepts.

What you will learn

  • Explore the healthcare and life sciences industry
  • Find out about the key applications of AI in different industry segments
  • Apply AI to medical images, clinical notes, and patient data
  • Discover security, privacy, fairness, and explainability best practices
  • Explore the AWS ML stack and key AI services for the industry
  • Develop practical ML skills using code and AWS services
  • Discover all about industry regulatory requirements

Product Details

Publication date: Nov 25, 2022
Length: 224 pages
Edition: 1st
Language: English
ISBN-13: 9781804610213

Table of Contents

18 Chapters
Part 1: Introduction to Machine Learning on AWS
Chapter 1: Introducing Machine Learning and the AWS Machine Learning Stack
Chapter 2: Exploring Key AWS Machine Learning Services for Healthcare and Life Sciences
Part 2: Machine Learning Applications in the Healthcare Industry
Chapter 3: Machine Learning for Patient Risk Stratification
Chapter 4: Using Machine Learning to Improve Operational Efficiency for Healthcare Providers
Chapter 5: Implementing Machine Learning for Healthcare Payors
Chapter 6: Implementing Machine Learning for Medical Devices and Radiology Images
Part 3: Machine Learning Applications in the Life Sciences Industry
Chapter 7: Applying Machine Learning to Genomics
Chapter 8: Applying Machine Learning to Molecular Data
Chapter 9: Applying Machine Learning to Clinical Trials and Pharmacovigilance
Chapter 10: Utilizing Machine Learning in the Pharmaceutical Supply Chain
Part 4: Challenges and the Future of AI in Healthcare and Life Sciences
Chapter 11: Understanding Common Industry Challenges and Solutions
Chapter 12: Understanding Current Industry Trends and Future Applications
Index
Other Books You May Enjoy

