Federated Learning with Python: Design and implement a federated learning system and develop applications using existing frameworks

By Kiyoshi Nakayama, PhD, and George Jeno

Challenges in Big Data and Traditional AI

This introductory chapter explains in detail why federated learning (FL) is going to be a key technology in the 2020s. You will learn what big data is and how it has become problematic from the perspectives of data privacy, model bias, and drift. A solid understanding of these issues and of the solutions to them will motivate you to embark on the challenging journey of acquiring the relevant knowledge and skills, using the following chapters to chart your mastery of FL. After reading this chapter, it will be obvious that a massive paradigm shift in artificial intelligence (AI) and machine learning (ML) is happening due to public and business concerns over the current reliance on big data-oriented systems. Without further ado, let us depart!

In this chapter, we will cover the following topics:

  • Understanding the nature of big data
  • Data privacy as a bottleneck
  • Impacts of training data and model bias
  • Model drift and performance degradation
  • FL as the main solution for data problems

Understanding the nature of big data

In the 2021 Enterprise Trends in Machine Learning survey of 403 business leaders conducted by Algorithmia, 76% of enterprises prioritized AI and ML over other IT initiatives. The COVID-19 pandemic led some of those companies to hasten their AI and ML development, as their chief information officers (CIOs) recounted, and 83% of the surveyed organizations increased their AI and ML budgets year-over-year (YoY), with a quarter of them doing so by over 50%. Improving customer experience and automating processes, whether through increased revenue or reduced costs, were the main drivers of the change. Other studies, including KPMG's latest report, Thriving in an AI World, tell essentially the same story.

The ongoing spree of AI and ML development, epitomized by deep learning (DL), was made possible by the advent of big data over the last decade. Equipped with Apache's open source utilities Hadoop and Spark, as well as cloud computing services such as Amazon Web Services (AWS), Google Cloud Platform (GCP), and Microsoft Azure, organizations in both the private and public sectors can now handle massive amounts of data in ways that were previously unthinkable. Companies and bureaus no longer have to be overcautious in developing data analytics models or designing data warehouses upfront so that relevant data is stored in the appropriate formats. Instead, they can simply pour available raw data into a data lake, expecting their data scientists to identify valuable variables down the line by checking their correlations with one another.

Big data might seem to be the ultimate solution to a wide range of problems, but as we will see in the following sections, it has several inherent issues. To clearly understand what the issues with big data could be, let’s examine what exactly big data is first.

Definition of big data

Big data refers to vast sets of information that are growing at an exponential rate. It has become so large, with humans now producing around two quintillion bytes of data daily, that it is difficult to process efficiently for ML purposes with traditional data management tools. Three Vs are commonly used to define the characteristics of big data, as presented here:

  • Volume: Data from various sources such as business transactions, Internet of Things (IoT) devices, social media, industrial equipment, videos, and so on, contribute to the sheer amount of data.
  • Velocity: Data speed is also an essential characteristic of big data. Often, data is needed in real time or near real time.
  • Variety: Data comes in all formats, such as numeric data, text documents, images, videos, emails, audio, financial transactions, and so on.

The following diagram depicts the intersection of the three Vs as big data:

Figure 1.1 – Big data's three Vs

In 1880, the United States (US) Census Bureau gathered so much census data that it estimated processing it would take 8 years. The following year, Herman Hollerith invented the Hollerith tabulating machine, which greatly reduced the work needed to process the data. The first data center was built in 1965 to store fingerprint data and tax information.

Big data now

The introduction of data lakes as a concept played a key role in ushering in the massive scales we see when working with data today. Data lakes give companies total freedom to store arbitrary types of data observed during operation, removing a restriction that would otherwise prevent them from collecting data that ends up being necessary in the future. While this freedom allows data lakes to preserve the maximum potential of the data a company generates, it can also lead to a key problem: complacency in understanding the collected data. The ease of storing different types of data in an unstructured manner encourages a store now, sort out later mentality. The true difficulty of working with unstructured data lies in processing it; deferring that processing can therefore produce data lakes that become highly cumbersome to sift through and work with because of the unrestricted growth of collected data.

Raw data is only as valuable as the models and insights that can be derived from it. The central data lake approach leads to cases where what can be derived from the data is limited by a lack of structure, causing issues ranging from storage inefficiency to outright intelligence inefficiency due to the difficulty of extraction. On the other hand, approaches preceding data lakes suffered from a simple lack of access to the amount of data potentially available. The fact that FL avoids both classes of problems is the key reason to see FL as the vehicle that will advance big data into the collective intelligence era.

This claim is substantiated by the fact that FL flips the big data flow from collect → derive intelligence to derive intelligence → collect. For humans, intelligence can be thought of as the condensed form of large swaths of experience. In a similar way, deriving intelligence at the source of the generated data, by training a model on the data at the source location, succinctly summarizes the data in a format that maximizes its accessibility for practical applications. The late collection step of FL creates the desired global intelligence with maximal data access and data storage efficiency. Even cases that use the generated data sources only partially can still benefit from the joint storage of intelligence and data, as it greatly reduces the number of data formats entering the residual data lake.
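
To make the derive intelligence → collect idea concrete, here is a minimal sketch, assuming a toy linear model trained with plain NumPy at the data source; the function names and data are illustrative only. The point is that only the small weight vector, never the raw records, ever leaves the device to be collected later:

```python
import numpy as np

def local_update(features: np.ndarray, labels: np.ndarray,
                 weights: np.ndarray, lr: float = 0.1, epochs: int = 5) -> np.ndarray:
    """Train a toy linear model on local data and return only the updated weights."""
    for _ in range(epochs):
        preds = features @ weights
        grad = features.T @ (preds - labels) / len(labels)  # mean-squared-error gradient
        weights = weights - lr * grad
    return weights  # this small vector is all that ever gets "collected"

# Each data source derives its own intelligence from private data it never shares
rng = np.random.default_rng(0)
X_local, y_local = rng.normal(size=(100, 3)), rng.normal(size=100)
w_local = local_update(X_local, y_local, np.zeros(3))
```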

Triple-A mindset for big data

While many definitions with emphases on different aspects have been proposed, Oxford professor Viktor Mayer-Schönberger and The Economist senior editor Kenneth Cukier brilliantly elucidated the nature of big data in their 2013 international bestseller, Big Data: A Revolution That Will Transform How We Live, Work, and Think. Big data is not about how big the data on a server is; it is about three major, interlinked shifts in mindset that reinforce one another. Their argument boils down to what we can summarize and call the Triple-A mindset for big data, which consists of an abundance of observations, acceptance of messiness, and ambivalence of causality. Let's take a look at them one by one.

Abundance of observations

Big data doesn't have to be big in terms of columns and rows or file size. Big data has a number of observations, commonly denoted as n, close or equal to the size of the population of interest. In traditional statistics, collecting data from the entire population (for example, people interested in fitness in New York) was not feasible, so researchers would randomly select a sample from the population (for example, 1,000 people interested in fitness in New York). Random sampling is often difficult to perform, and so is justifying a narrow focus on particular subgroups: surveying people around gyms would miss those who run in parks or practice yoga at home, and why choose gym goers rather than runners and yoga fans? Thanks to the development and sophistication of Information and Communications Technology (ICT) systems, however, researchers today can access data on approximately all of the population through multiple sources, such as records of Google searches about fitness. This paradigm of abundance, or n = all, is advantageous because what the data says can be interpreted as a true statement about the population, whereas the old method could only infer such truth at a level of statistical significance, typically expressed as a p-value under 0.05. Small data provides statistics; big data proves states.

Acceptance of messiness

Big data tends to be messy. If we use Google search data as a proxy for someone's interests, for example, we could mistakenly attribute searches made by their family or friends on their devices to them, and the estimated interest would be inaccurate in proportion to the share of such searches. On some devices, a significant number of searches may be made by multiple users, such as shared computers at an office or a smartphone belonging to a child whose younger siblings do not yet own one. People may also search for words that popped up in a conversation with someone else rather than in self-talk, which does not necessarily reflect their own interests. In studies using traditional methods, researchers would have to make sure that such devices were excluded from their sample data, because with a small number of observations the mess can significantly degrade the quality of inference. This is not the case in big data studies. Researchers are willing to accept the mess because its effect diminishes as the number of observations grows toward n = all. On most devices, Google searches are made by the owner most of the time, and the impact of searches made in other contexts does not matter much.

Ambivalence of causality

Big data is often used to study correlation rather than causation; in other words, it usually tells us what but not why. For many practical questions, correlation alone provides the answer. Mayer-Schönberger and Cukier give several examples in Big Data: A Revolution That Will Transform How We Live, Work, and Think, among which is Fair Isaac Corporation's Medication Adherence Score, established in 2011. In an era where people's behavioral patterns are datafied, collecting n = all observations for the variables of interest is possible, and the correlation found among them is powerful enough to direct our decision-making. There is no need to know the psychological scores of consistency or conformity that cause people's adherence to medical prescriptions; by looking at how they behave in other aspects of life, we can predict whether they will follow a prescription or not.

By embracing the triple mindset of abundance, acceptance, and ambivalence, enterprises and governments have generated intelligence across tasks from pricing services to recommending products, optimizing transportation routes, and identifying crime suspects. Nevertheless, that mindset has been challenged in recent years, as shown in the following sections. First, let’s glimpse into how the abundance of observations often taken for granted is currently under pressure.

Data privacy as a bottleneck

FL is often said to be one of the most popular privacy-preserving AI technologies because private data does not have to be collected by or shared with third-party entities to generate high-quality intelligence. In this section, therefore, we discuss data privacy, the bottleneck that FL tries to resolve in order to create high-quality intelligence.

What is data privacy? In May 2021, HCA Healthcare announced that the company had struck a deal to share its patient records and real-time medical data with Google. Various media outlets quickly responded by warning the public about the deal, as Google had already drawn criticism over Project Nightingale, in which the tech giant allegedly exploited the sensitive data of millions of American patients. Given that over 80% of the public believes the potential risks of data collection by companies outweigh the benefits, according to a 2019 poll by Pew Research Center, data sharing projects of such a scale are naturally seen as a threat to people's data privacy.

Data privacy, also known as information privacy, is the right of individuals to control how their personal information is used, and it requires third parties to handle, process, store, and use such information properly in accordance with the law. It is often confused with data security, which ensures that data is accurate, reliable, and accessible only to authorized users. In the case of Google accounts, data privacy regulates how the company can use the account holders' information, while data security requires it to deploy measures such as password protection and 2-step verification. In explaining these two concepts, Data Privacy Manager uses the analogy of a window for security and a curtain for privacy: data security is a prerequisite for data privacy. Put together, they comprise data protection, as shown in the following diagram:

Figure 1.2 – Data security versus data privacy

We can see from the preceding diagram that while data security limits who can access data, data privacy limits what can be in the data. Understanding this distinction is very important because data privacy can multiply the consequences of failures in data security. Let’s look into how.

Risks in handling private data

Failing at data protection is costly. According to IBM's Cost of a Data Breach Report 2021, the global average cost of a data breach that year reached $4.24 million in US dollars (USD), considerably higher than the $3.86 million of a year earlier and the highest figure in the 17-year history of the report; the increased number of people working remotely in the aftermath of the COVID-19 outbreak is considered a major reason for the spike. The top five industries by average total cost are healthcare, finance, pharmaceuticals, technology, and energy. Nearly half of the breaches that year involved customer personally identifiable information (PII), which costs $180 per record on average. Once customer PII is breached, consequences such as system downtime during the response, loss of customers, the need to acquire new customers, reputation damage, and diminished goodwill ensue; hence the hefty cost.

The IBM study also found that failing to comply with regulations for data protection was top among the factors that amplify data breach costs (https://www.ibm.com/downloads/cas/ojdvqgry).

Increased data protection regulations

As technology advances, the need to protect customer data has become more critical. Consumers require and expect privacy protection during every transaction, and many simple activities, whether online banking or using a phone app, can put personal data at risk.

Governments worldwide were initially slow to react by creating laws and regulations to protect personal data from identity theft, cybercrime, and data privacy violations. However, times are now changing as data protection laws are beginning to take shape globally.

There are several drivers for the increase in regulation, including the growth of enormous amounts of data and the need for stronger data security and privacy to protect users from nefarious activities such as identity theft.

Let’s look at some of the measures taken toward data privacy in the following sub-sections.

General Data Protection Regulation (GDPR)

The General Data Protection Regulation (GDPR) by the European Union is regarded as the first data protection regulation of the modern data economy and has been emulated by many countries crafting their own. GDPR was proposed in 2012, adopted by the EU Council and Parliament in 2016, and enforced from May 2018. It superseded the Data Protection Directive that had been adopted in 1995.

What makes GDPR epoch-making is its stress on the protection of PII, including people's names, locations, racial or ethnic origin, political or sexual orientation, religious beliefs, association memberships, and genetic/biometric/health information. Organizations and individuals both in and outside the EU have to follow the regulation when dealing with the personal data of EU residents. There are seven principles of GDPR, six of which were inherited from the Data Protection Directive; the new principle is accountability, which demands that data users maintain documentation about the purpose and procedure of personal data usage.

GDPR has shown the public what the consequences of its violation can be. Depending on the severity of non-compliance, a GDPR fine can reach 2% of global annual turnover or €10 million, whichever is higher, for less severe infringements, and 4% of global annual turnover or €20 million, whichever is higher, for more severe ones. In May 2018, thousands of Europeans filed a complaint against Amazon.com Inc. through the French organization La Quadrature du Net, known in English as Squaring the Net, accusing the company of using its advertisement targeting system without customer consent. After 3 years of investigation, Luxembourg's National Commission for Data Protection (CNPD) made headlines around the world: it issued Amazon a €746 million fine. Similarly, WhatsApp was fined by Ireland's Data Protection Commission in September 2021 for GDPR infringement; again, the investigation had taken 3 years, and the fine amounted to €225 million.

Currently, in the US, a majority of states have privacy protections in place or soon will. Additionally, several states, such as California, Colorado, and Virginia, have strengthened existing regulations. Let's look at each to get an idea of these changes.

California Consumer Privacy Act (CCPA)

The state of California followed suit. The California Consumer Privacy Act (CCPA) became effective on January 1, 2020. As the name suggests, the aim of the regulation is to protect consumers' PII, just as GDPR does. Compared to GDPR, the scope of the CCPA is significantly more limited: it applies only to for-profit organizations that collect data from over 50,000 points (residents, households, or devices in the state) in a year, generate annual revenue over $25 million, or make half of their annual revenue by selling such information. However, CCPA infringement can be much more costly than GDPR infringement, since the former has no ceiling on its fines ($2,500 per record for each unintentional violation and $7,500 per record for each intentional violation).

Colorado Privacy Act (CPA)

Under the Colorado Privacy Act (CPA), starting July 1, 2024, data collectors and controllers will have to honor the universal opt-outs that users have selected against targeted advertising and the sale of their data. This rule protects Colorado residents from targeted sales and advertising as well as certain types of profiling.

Virginia Consumer Data Protection Act (CDPA)

Virginia's Consumer Data Protection Act (CDPA) introduces several changes to increase security and privacy as of January 1, 2023. These changes apply to organizations that do business in Virginia or with Virginia residents. Data collectors need to obtain consumers' approval to utilize their private data. The changes also aim to determine the adequacy of the privacy and security practices of AI vendors, which may require the removal of affected data.

These are just a few simple examples of how data regulations will take shape in the US. What does this look like for the rest of the world? Some estimate that by 2024, 75% of the global population will have personal data covered by privacy regulations of one type or another.

Another example of major data protection regulation is Brazil's Lei Geral de Proteção de Dados Pessoais (LGPD), which has been in force since September 2020. It replaced dozens of laws in the country related to data privacy. LGPD was modeled after GDPR, and its contents are almost identical. In Asia, Japan was the first country to introduce a data protection regulation: the Act on the Protection of Personal Information (APPI) was adopted in 2003 and amended in 2015. In April 2022, the latest version of APPI was put into force to address modern concerns over data privacy.

FL has been identified as a critical technology that can work well with privacy regulations and regulatory compliance in different domains.

From privacy by design to data minimalism

Organizations have been acclimatizing to these regulations. TrustArc's Global Privacy Benchmarks Survey 2021 found that the number of enterprises with a dedicated privacy office is increasing: 83% of respondents in the survey had a privacy office, whereas the rate was only 67% in 2020. 85% had a strategic and reportable privacy management program in place, yet 73% of them believed that they could do more to protect privacy. Their eagerness is hardly surprising, as 34% of the respondents claimed that they had faced a data breach in the previous 3 years, the costly consequences of which were mentioned previously in this chapter. A privacy office is typically led by a data protection officer (DPO), who is responsible for the company's Data Protection Impact Assessment (DPIA) in order to comply with regulations such as GDPR that demand accountability and documentation of personal data handling. DPOs are also responsible for monitoring and ensuring that their organizations treat personal data in compliance with the law, and top management and the board are supposed to provide the support and resources DPOs need to complete this task.

In the face of GDPR, the current trend in data protection is shifting toward data minimalism. Data minimalism in this context does not necessarily mean minimizing the size of data; it pertains more directly to minimizing the PII factors in data so that individuals cannot be identified from its data points. Data minimalism therefore affects the AI sector's ability to create high-performing AI applications, because a shortage of data variety in the ML process generates ML model biases and unsatisfying predictive performance.

The abundance mindset for big data introduced at the beginning of the chapter has thus been disciplined by the public concern over data privacy. The risk of being fined for violating data protection regulations, coupled with the wasteful cost of having a data graveyard, calls for practicing data minimalism rather than data abundance.

That is why FL is becoming a must-have solution for many AI solution providers, such as those in the medical sector, that are struggling with public concern over data privacy, which becomes an issue whenever a third-party entity needs to collect private data to improve the quality of ML models and their applications. As mentioned, FL is a promising framework for privacy-preserving AI because learning can happen wherever the data resides; even if the data is never made available to the AI service provider, all we have to do is collect and aggregate the trained ML models in a consistent way.

Now, let’s consider another facet of the Triple-A mindset for big data being challenged: acceptance of messy data.

Impacts of training data and model bias

The sheer volume of big data annihilates the treacherous reality of garbage in, garbage out. Or does it? In fact, the messiness of data can only be accepted if enough data from a variety of sources and distributions can be fully learned without biasing the outcomes of the learning. Actually training on big data in a centralized location takes a lot of time, huge computational resources, and a lot of storage. We would also need methods to measure and reduce model bias without directly collecting or accessing sensitive and private data, which would conflict with some of the privacy regulations discussed previously. FL is also a form of distributed and collaborative learning, which becomes critical for eliminating data and model bias and absorbing the messiness of the data. With collaborative and distributed learning, we can significantly increase data accessibility and the efficiency of an entire learning process that is often very expensive and time-consuming. It gives us a chance to break through the limitations that big data training used to have, as discussed in the following sections.

Expensive training of big data

According to Flexera's 2022 State of the Cloud report (https://www.flexera.com/blog/cloud/cloud-computing-trends-2022-state-of-the-cloud-report), 37% of enterprises spend more than $12 million and 80% spend over $1.2 million per year on public cloud. Training in the cloud is not cheap, and this cost can easily be expected to grow significantly along with the increasing demand for AI and ML. Sometimes, ML models cannot be fully trained on big data because of the following issues:

  • Big data storage: Big data storage is an architecture for compute and storage that collects and manages large datasets for AI applications or real-time analytics. Enterprise companies worldwide are paying more than $100 billion just for cloud storage and data center costs (https://a16z.com/2021/05/27/cost-of-cloud-paradox-market-cap-cloud-lifecycle-scale-growth-repatriation-optimization/). While some of these datasets are critical for the applications they support, what companies really want is often the business intelligence that can be extracted from the data, not the data itself.
  • Significant training time: Building and training an ML model that can be delivered as an authentic product takes a significant amount of time, not only for the training process but also for preparing the ML pipelines. In many cases, the true value of the intelligence is therefore lost by the time the ML model is delivered.
  • Huge computation: Training an ML model often consumes significant computational resources. For example, an ML task of manipulating objects such as a Rubik's Cube with a robotic hand can require more than 1,000 computers, plus a dozen machines running specialized graphics chips for several months.
  • Communications latency: To form big data, especially in the cloud, a significant amount of data needs to be transferred to the server, which itself causes communications latency. In most use cases, FL requires much less data to be transferred from local devices or learning environments to a server called an aggregator, which synthesizes the local ML models collected from those devices.
  • Scalability: In traditional centralized systems, scalability becomes an issue because of the complexity of big data and its costly infrastructure, such as huge storage and computing resources in the cloud server environment. An FL server only performs aggregation to synthesize the multiple trained local models into an updated global model. Both system and learning scalability therefore increase significantly, as ML training is conducted on edge devices in a distributed manner rather than in a single centralized learning server.

FL effectively utilizes distributed computational resources for lightweight training of ML models. Whether training happens on physical devices or on virtual instances in a cloud system, parallelizing the model training process across distributed environments often accelerates the learning itself.

In addition, once the trained models are collected, the FL system can quickly synthesize them into an updated ML model called a global model, which absorbs what has been learned at the edge, making it possible to deliver the intelligence in near real time.
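
As a rough illustration of this synthesis step, the sketch below performs a FedAvg-style weighted average of locally trained model weights, weighting each client's contribution by its sample count. It is a minimal sketch of the general idea, not the specific aggregation algorithms covered later in the book:

```python
import numpy as np

def aggregate(local_weights: list[np.ndarray], sample_counts: list[int]) -> np.ndarray:
    """Combine locally trained weights into a global model (FedAvg-style average)."""
    total = sum(sample_counts)
    return sum(w * (n / total) for w, n in zip(local_weights, sample_counts))

# Three clients of different sizes contribute their locally trained weights
global_weights = aggregate(
    [np.array([0.9, 1.1]), np.array([1.0, 1.0]), np.array([1.2, 0.8])],
    sample_counts=[200, 50, 150],
)
```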

Model bias and training data

ML bias happens when an ML algorithm generates results that are systematically prejudiced because of erroneous assumptions in the ML process. ML bias is also sometimes called algorithm bias or AI bias.

Yann LeCun, the 2018 Turing Award winner for his outstanding contributions to the development of DL, says "ML systems are biased when data is biased" (https://twitter.com/ylecun/status/1274782757907030016). His comment refers to a computer vision (CV) face upsampling system whose network was pre-trained on the Flickr-Faces-HQ dataset compiled by the Nvidia team; because the dataset mainly contains pictures of white people, the system reconstructed many faces as white. For this misclassification problem, the model architecture is not what mandates the output. Hence the conclusion: a racially skewed dataset caused an otherwise neutral model to produce biased outcomes.

Productive conversations about AI and ML biases have been led by the former lead of AI Ethics at Google. The 2018 publication of the Gender Shades paper demonstrated race and gender bias in major facial recognition models, and lawmakers in Congress have sought to prohibit the use of the technology by the US federal government. Tech companies including Amazon, IBM, and Microsoft also agreed to suspend or terminate sales of facial recognition models to the police. Scientists and engineers are encouraged to take an interventionist approach to data collection: specify the objectives of model development, form a strict policy for data collection, and conduct a thorough appraisal of the collected data to avoid biases. Details are available on the FATE/CV website (https://sites.google.com/view/fatecv-tutorial/home).

FL could be one of the most promising ML technologies for overcoming data-silo issues. Very often, data is not even accessible or usable for training, causing significant bias in data and models. FL is naturally useful for overcoming bias because it resolves the data privacy and silo issues that are the bottleneck to fundamentally avoiding data bias. In this context, FL is becoming a breakthrough for the implementation of big data services and applications, as thoroughly investigated in https://arxiv.org/pdf/2110.04160.pdf.

Also, there are several techniques that try to mitigate model bias in FL itself, such as Reweighing and Prejudice Remover, both detailed in https://arxiv.org/pdf/2012.02447.pdf.
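
To give a flavor of what such a mitigation technique does, here is a rough, centralized illustration of the classic Reweighing idea: each (sensitive group, label) combination receives a weight that makes the sensitive attribute and the label look statistically independent in the training set. The column names are hypothetical, and this sketch is not the federated variant discussed in the referenced paper:

```python
import pandas as pd

def reweighing_weights(df: pd.DataFrame, sensitive: str, label: str) -> pd.Series:
    """Per-row weights w(a, y) = P(A=a) * P(Y=y) / P(A=a, Y=y)."""
    n = len(df)
    p_a = df[sensitive].value_counts(normalize=True)   # P(A = a)
    p_y = df[label].value_counts(normalize=True)       # P(Y = y)
    p_ay = df.groupby([sensitive, label]).size() / n   # P(A = a, Y = y)
    return df.apply(lambda row: p_a[row[sensitive]] * p_y[row[label]]
                    / p_ay[(row[sensitive], row[label])], axis=1)

# Usage: under-represented (group, outcome) combinations get weights above 1
df = pd.DataFrame({"group": ["A", "A", "B", "B", "B"],
                   "hired": [1, 0, 0, 0, 1]})
df["sample_weight"] = reweighing_weights(df, "group", "hired")
```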

Model drift and performance degradation

Model drift is generally the degradation of ML model performance caused by changes in the data and in the relationships between input and output (I/O) variables; it is also known as model decay. Model drift can be addressed through continuous learning that adapts to the latest changes in datasets or environments in near real time. One important aspect of FL is that it realizes a continuous learning framework by updating an ML model, in a consistent manner, whenever learning happens in the local distributed environment. In that way, FL can resolve the situation often seen in enterprise AI applications where the intelligence is useless by the time it is delivered to production.

We will now touch on how models could get degraded or stop working, and then some of the current efforts of model operations (ModelOps) to continuously improve the performance of models and achieve sustainable AI operations.

How models can stop working

Any AI and ML model with fixed parameters, or weights, generated from the training data and adjusted to the test data can perform fairly well when deployed in an environment where the model receives data similar to the training and test data. If an autonomous driving model is well trained with data recorded during sunny daytime, the model can drive vehicles safely on sunny days because it is doing what it has been trained to do. On a rainy night, however, nobody should be in or near the vehicle if it is autonomously driven: the model is fed with totally unfamiliar, dark, and blurry images; its decisions will not be reliable at all. In such a situation, the model’s decision will be far off the track, hence the name model drift. Again, model drift is not likely to happen if the model is deployed in an environment similar to the training and testing environment and if the environment does not change significantly over time. But in many business situations, that assumption does not always hold, and model drift becomes a serious issue.

There are two types of model drift: data drift and concept drift. Data drift happens when the input data to a deployed model differs significantly from the data the model was trained on; in other words, changes in the data distribution are the cause of data drift. The aforementioned daytime autonomous vehicle model not performing well at night is an example of data drift. Another example would be an ice-cream sales prediction model trained in California being deployed in New Zealand; seasonality in the southern hemisphere is the opposite of that in the northern hemisphere, so the estimated sales of ice cream will be low for summer and high for winter, contrary to the actual sales volume.
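
One common way to surface data drift in practice is to compare the live input distribution of each feature against the training distribution, for example with a two-sample Kolmogorov-Smirnov test. The sketch below is a minimal illustration under that assumption, not a production-grade drift detector:

```python
import numpy as np
from scipy.stats import ks_2samp

def drifted_features(train: np.ndarray, live: np.ndarray, alpha: float = 0.01) -> list[int]:
    """Return the indices of features whose live distribution differs from training."""
    flagged = []
    for col in range(train.shape[1]):
        _, p_value = ks_2samp(train[:, col], live[:, col])
        if p_value < alpha:          # the two samples are unlikely to share a distribution
            flagged.append(col)
    return flagged

# Usage: simulate drift on one of three features
rng = np.random.default_rng(42)
X_train = rng.normal(0.0, 1.0, size=(1000, 3))
X_live = X_train.copy()
X_live[:, 1] += 2.0                  # feature 1 has shifted
print(drifted_features(X_train, X_live))   # expected: [1]
```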

Concept drift, on the other hand, is a result of changes in how variables correlate with each other. In the terminology of statistics, this implies that the data-generating process has been altered. And this is what Google Flu Trends (GFT) suffered from, as the author of The Undercover Economist put it in the following Financial Times article: https://www.ft.com/content/21a6e7d8-b479-11e3-a09a-00144feabdc0#axzz30qfdzLCB.

Prior to 2013, search queries were meaningfully correlated with the spread of flu, as it was mainly people who suspected they were infected who typed those words into the browser, and the model therefore worked successfully. This may no longer have been the case in 2013, since people in other categories, such as those who were being cautious about a potential pandemic or who were simply curious, were searching for those words, and they may have been led to do so by Google's own recommendations. This concept drift likely made GFT overestimate the spread relative to the medical reports provided by the Centers for Disease Control and Prevention (CDC).

Either by data or by concept, model drift causes model performance degradation, and it occurs because of our focus on correlation. The ground truth in data science parlance does not mean something like the universal truth in hard science such as physics and chemistry—that is, causation. It is merely a true statement about how variables in given data correlate with each other in a particular environment, and it provides no guarantee that the correlation holds when the environment changes or differs. This is to say that what we estimate as the ground truth can vary over time and locations, just like the ground has been reshaped by seismic events throughout history and geography.

Continuous monitoring – the price of letting causation go

In a survey commissioned by Redis Labs (https://venturebeat.com/business/redis-survey-finds-ai-is-stressing-it-infrastructure-to-breaking-point/), about half of the respondents cited model reliability (48%), model performance (44%), accuracy over time (57%), and latency of running the model (51%) as the top challenges for getting models deployed. Given the risk and concern of model drift, AI and ML model stakeholders need to work on two additional tasks after deployment. First, model performance must be continuously monitored to detect model drift. Both data drift and concept drift can take place gradually or suddenly. Once model drift is detected, the model needs to be retrained with new training data, and when concept drift occurs, even the use of a new model architecture may be necessary to upgrade the model.

In order to address these requirements, a new ML principle called Continuous Delivery for Machine Learning (CD4ML) has been proposed. In the CD4ML framework, a model is first coded and trained with training data. The model is then tested with a separate dataset and evaluated based on chosen metrics, and more often than not, the best model is selected from multiple candidates. Next, the selected model is productionized with a further test to make sure that it performs well after deployment, and once it passes the test, it is deployed. Here, the monitoring process starts. When model drift is observed, the model is retrained with new data or given a new architecture, depending on the severity of the drift. If you are familiar with software engineering, you might have noticed that CD4ML is the adoption of continuous integration/continuous delivery (CI/CD) in the field of ML. In a similar vein, ModelOps, an AI and ML operational framework stemming from the development-operations (DevOps) software engineering framework, is gaining popularity. ModelOps bridges ML operations (MLOps: the integration of data engineering and data science) and application engineering; it can be seen as the enabler of CD4ML.
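
The monitor, retrain, and redeploy loop that CD4ML describes can be sketched schematically as follows. The "model" here is just a stored mean and drift is a simple mean shift; every name is illustrative rather than part of any particular CD4ML tooling:

```python
import numpy as np

def fit(batch: np.ndarray) -> float:
    return float(batch.mean())                       # toy "model": predict the running mean

def monitor_and_retrain(model: float, batches: list[np.ndarray], tolerance: float = 0.5) -> float:
    for batch in batches:                            # new data keeps arriving after deployment
        drift = abs(batch.mean() - model)            # crude drift signal
        if drift > tolerance:
            model = fit(batch)                       # retrain (or swap architecture) and redeploy
            print(f"drift {drift:.2f} detected, model updated to {model:.2f}")
    return model

rng = np.random.default_rng(7)
stream = [rng.normal(0, 1, 500), rng.normal(0, 1, 500), rng.normal(2, 1, 500)]
model = monitor_and_retrain(fit(stream[0]), stream[1:])
```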

The third factor of the Triple-A mindset for big data lets us focus on correlation, and it has helped AI and ML models to be built rapidly over the last decade. Finding correlation is much easier than discovering causation. But for the many AI and ML models that have been telling us what we need to know from people's Google search patterns over the years, we have to check whether they still work today. And we will have to check again tomorrow.

That is why FL is one of the important approaches to continuous learning. When creating and operating an FL system, it is also important to build in ModelOps functionality, as the critical role of FL is to keep improving models constantly from various learning environments in a collaborative manner. It is even possible to realize a crowdsourced learning framework with FL, in which people on the platform take the desired ML model, adapt and train it locally, and return an updated model to the FL server via an aggregator. With an advanced model aggregation framework that filters out poisonous ML models that could degrade the current ones, FL can consistently integrate these contributions and thus realize the sustainable continuous learning operation that is key for a platform with ModelOps functionality.
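
As a minimal illustration of what filtering out poisonous models might look like, the sketch below drops any client update that lies far from the coordinate-wise median update before averaging the rest. The threshold and distance measure are arbitrary choices for illustration; the book's aggregator and its actual filtering logic are covered in later chapters:

```python
import numpy as np

def filter_and_average(updates: list[np.ndarray], max_distance: float = 1.0) -> np.ndarray:
    """Average only those updates that stay close to the coordinate-wise median."""
    median = np.median(np.stack(updates), axis=0)
    kept = [u for u in updates if np.linalg.norm(u - median) <= max_distance]
    return np.mean(np.stack(kept), axis=0)

# Usage: the third "client" sends an obviously off-track update and is ignored
honest = [np.array([0.5, 1.0]), np.array([0.6, 0.9])]
poisoned = [np.array([10.0, -10.0])]
print(filter_and_average(honest + poisoned))   # close to the honest updates
```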

FL as the main solution for data problems

So far in this chapter, we confirmed that big data has issues to be addressed. Data privacy must be preserved in order to protect not only individuals but also data users who would face risks of data breaches and subsequent fines. Biases in a set of big data can affect inference significantly through proxies, even when factors about gender and race are omitted, and focus on correlation rather than causation makes predictive models vulnerable to model drift.

Here, let us discuss the difference between a traditional big data ML system and an FL system in terms of their architectures, processes, issues, and benefits. The following diagram depicts a visual comparison between a traditional big data ML system and an FL system:

Figure 1.3 – Comparison between traditional big data ML system and FL system

In the traditional big data system, data is gathered to create large data stores. These large data stores are used to solve a specific problem using ML. The resulting model displays strong generalizability due to the volume of data it is trained on and is eventually deployed.

However, continuous data collection uses large amounts of communication bandwidth. In privacy-focused applications, the transmission of data may be banned entirely, making model creation impossible. Training large ML models on big data stores is computationally expensive, and traditional centralized training efficiency is limited by single-machine performance. Slow training processes lead to long delays between incremental model updates, leading to a lack of flexibility in accommodating new data trends.

On the other hand, in an FL system, ML training is performed directly at the location of the data. The resulting trained models are collected at the central server. Aggregation algorithms are used to produce an aggregated model from the collected models. The aggregated model is sent back to the data locations for further training.

FL approaches often incur overhead to set up and to maintain training performance in distributed-system settings. However, even with this somewhat more complicated architecture, the benefits outweigh the added complexity. Training is performed at the data location, so data is never transmitted, which maintains data privacy. Training can be performed asynchronously across a variable number of nodes, resulting in efficient and easily scalable distributed learning. Only model weights are transmitted between the server and the nodes, so FL is communication-efficient. Advanced aggregation algorithms can maintain training performance even in restricted scenarios and increase efficiency in standard ML scenarios too.
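
As a back-of-the-envelope illustration of the communication claim, compare shipping a modest local dataset with shipping only the weights of a small model; the sizes below are made up purely for illustration:

```python
import numpy as np

# A hypothetical local dataset: 100,000 rows of 50 float32 features
X_local = np.zeros((100_000, 50), dtype=np.float32)
# The corresponding toy linear model: one float32 weight per feature
weights = np.zeros(50, dtype=np.float32)

print(f"raw data payload:   {X_local.nbytes / 1e6:.1f} MB")   # 20.0 MB
print(f"model-only payload: {weights.nbytes} bytes")          # 200 bytes
```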

The vast majority of AI projects do not seem to reach delivery, or they simply fall short altogether. To deliver an authentic AI application and product, all the issues discussed previously need to be taken seriously. It is clear that FL, together with other key technologies for handling the local data processed by the ML pipeline and engine, is becoming a critical solution for resolving data-related problems in a continuous and collaborative manner.

How can we harness the power of AI and ML to optimize the technical system for society in its entirety—that is, bring about a more joyous, comfortable, convenient, and safe world while being data minimalistic and ethical, as well as delivering improvements continuously? We contend that the key is a collective intelligence or intelligence-centric platform, also discussed in Chapter 10, Future Trends and Developments. In subsequent chapters of the book, we introduce the concept, design, and implementation of an FL system as a promising technology for orchestrating collective intelligence with networks of AI and ML models to fulfill those requirements discussed so far.

Summary

This chapter provided an overview of how FL could potentially solve many of the big data issues by first understanding the definition of big data and its nature, involving an abundance of observations, acceptance of messiness, and ambivalence of causality.

We have learned about privacy regulations of various forms from many regions, and about the risk of data breaches and privacy violations that eventually lead to lost profits and create a bottleneck for building authentic AI applications. Federated learning, by design, does not collect any raw data, so it can preserve data privacy and comply with those regulations.

In addition, with an FL framework, we can reduce the inherent bias that affects the performance of ML models and minimize model drift through a continuous learning framework. A distributed and collaborative learning framework such as FL is thus required for a more cost-effective and efficient approach to ML.

This introductory chapter concluded with the potential of FL as a primary solution for the aforementioned big data problems based on the paradigm-shifting idea of collective intelligence that could potentially replace the current mainstream data-centric platforms.

In the next chapter, we will see where in the landscape of data science FL fits and how it can open a new era of ML.

Further reading

To learn more about the topics that were covered in this chapter, please take a look at the following references:

  • Algorithmia. (2021). 2021 Enterprise Trends in Machine Learning. Seattle: Algorithmia.
  • Mayer-Schönberger, V. and Cukier, K. (2013). Big Data: A Revolution That Will Transform How We Live, Work, and Think. Boston/New York: Eamon Dolan/Houghton Mifflin Harcourt.
  • The Economist. (2010, February 27). Data, data everywhere. The Economist.
  • Data Privacy Manager. (2021, October 1). Data Privacy vs. Data Security [definitions and comparisons]. Data Privacy Manager.
  • IBM. (2021). Cost of a Data Breach Report 2021. New York: IBM.
  • Burgess, M. (2020, March 24). What is GDPR? The summary guide to GDPR compliance in the UK. Wired.
  • TrustArc. (2021). Global Privacy Benchmarks Survey 2021. Walnut Creek: TrustArc.
  • Auxier, B., Rainie, L., Anderson, M., Perrin, A., Kumar, M. and Turner, E. (2019, November 15). Americans and Privacy: Concerned, Confused and Feeling Lack of Control Over Their Personal Information. Pew Research Center.
  • Hes, R. and Borking, J. (1995). Privacy-Enhancing Technologies: The Path to Anonymity. Hague: Information and Privacy Commissioner of Ontario.
  • Goldsteen, A., Ezov, G., Shmelkin, R., Moffie, M. and Farkash, A. (2021). Data minimization for GDPR Compliance in machine learning models. AI and Ethics, 1-15.
  • Knight, W. (2019, November 19). The Apple Card Didn’t ‘See’ Gender—and That’s the Problem. Wired.
  • Gebru, T. and Denton, E. (2020). Tutorial on Fairness Accountability Transparency and Ethics in Computer Vision at CVPR 2020. Available online at https://sites.google.com/view/fatecv-tutorial/home.
  • Ukanwa, K. (2021, May 3). Algorithmic bias isn’t just unfair — it’s bad for business. The Boston Globe.
  • O’Neil, C. (2016). Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. New York: Crown.
  • Blackman, R. (2020, October 15). A Practical Guide to Building Ethical AI. Harvard Business Review.
  • Ginsberg, J., Mohebbi, M., Patel, R., Brammer, L., Smolinski, M. S. and Brilliant, L. (2009). Detecting influenza epidemics using search engine query data. Nature 457, 1012–1014.
  • Anderson, C. (2008, June 23). The End of Theory: The Data Deluge Makes the Scientific Method Obsolete. Wired.
  • Butler, D. (2013). When Google got flu wrong. Nature 494, 155–156.
  • Harford, T. (2014, March 28). Big data: are we making a big mistake?. Financial Times.
  • Dral, E. and Samuylova, E. (2020, November 12). Machine Learning Monitoring, Part 5: Why You Should Care About Data and Concept Drift. Evidently AI Blog.
  • Forrester Consulting. (2021). Deploy ML Models To In-Memory: Databases For Blazing Fast Performance. Retrieved from https://redis.com/wp-content/uploads/2021/06/forrester-ai-opportunity-snapshot.pdf.
  • Sato, D., Wider, A. and Windheuser, C. (2019, September 19). Continuous Delivery for Machine Learning: Automating the end-to-end lifecycle of Machine Learning applications. Retrieved from martinFowler.com at https://martinfowler.com/articles/cd4ml.html.
  • Verma, D. C. (2021). Federated AI for Real-World Business Scenarios. New York: CRC Press.
  • Bostrom, R. P. and Heinen, J. S. (1977). MIS problems and failures: A socio-technical perspective. Part I: The causes. MIS Quarterly, 1(3), pp. 17.
  • Weld, D. S., Lin, C. H. and Bragg, J. (2015). Artificial intelligence and collective intelligence. Handbook of Collective Intelligence, 89-114.
  • Abay, A., Zhou, Y., Baracaldo, N., Rajamoni, S., Chuba, E. and Ludwig, H. Mitigating Bias in Federated Learning. Available at https://arxiv.org/pdf/2012.02447.pdf.

Key benefits

  • Design distributed systems that can be applied to real-world federated learning applications at scale
  • Discover multiple aggregation schemes applicable to various ML settings and applications
  • Develop a federated learning system that can be tested in distributed machine learning settings

Description

Federated learning (FL) is a paradigm-shifting technology in AI that enables and accelerates machine learning (ML), allowing you to work on private data. It has become a must-have solution for most enterprise industries, making it a critical part of your learning journey. This book helps you get to grips with the building blocks of FL and how the systems work and interact with each other using solid coding examples. FL is more than just aggregating collected ML models and bringing them back to the distributed agents. This book teaches you about all the essential basics of FL and shows you how to design distributed systems and learning mechanisms carefully so as to synchronize the dispersed learning processes and synthesize the locally trained ML models in a consistent manner. This way, you’ll be able to create a sustainable and resilient FL system that can constantly function in real-world operations. This book goes further than simply outlining FL's conceptual framework or theory, as is the case with the majority of research-related literature. By the end of this book, you’ll have an in-depth understanding of the FL system design and implementation basics and be able to create an FL system and applications that can be deployed to various local and cloud environments.

Who is this book for?

This book is for machine learning engineers, data scientists, and artificial intelligence (AI) enthusiasts who want to learn about creating machine learning applications empowered by federated learning. You’ll need basic knowledge of Python programming and machine learning concepts to get started with this book.

What you will learn

  • Discover the challenges related to centralized big data ML that we currently face along with their solutions
  • Understand the theoretical and conceptual basics of FL
  • Acquire design and architecting skills to build an FL system
  • Explore the actual implementation of FL servers and clients
  • Find out how to integrate FL into your own ML application
  • Understand various aggregation mechanisms for diverse ML scenarios
  • Discover popular use cases and future trends in FL

Product Details

Publication date: Oct 28, 2022
Length: 326 pages
Edition: 1st
Language: English
ISBN-13: 9781803248752


Table of Contents

Part 1: Federated Learning – Conceptual Foundations
Chapter 1: Challenges in Big Data and Traditional AI
Chapter 2: What Is Federated Learning?
Chapter 3: Workings of the Federated Learning System
Part 2: The Design and Implementation of the Federated Learning System
Chapter 4: Federated Learning Server Implementation with Python
Chapter 5: Federated Learning Client-Side Implementation
Chapter 6: Running the Federated Learning System and Analyzing the Results
Chapter 7: Model Aggregation
Part 3: Moving Toward the Production of Federated Learning Applications
Chapter 8: Introducing Existing Federated Learning Frameworks
Chapter 9: Case Studies with Key Use Cases of Federated Learning Applications
Chapter 10: Future Trends and Developments
Index
Other Books You May Enjoy

Amazon Customer Jan 09, 2023
Rated 5 out of 5 stars
I found this book really useful for seeing what kind of use cases are out there and currently being pursued, especially in industry and not just in academia. As federated learning (FL) technology becomes hot in the field of AI, the business opportunities also seem to be very big. I have been involved in many AI projects, and most of them seem to fail simply because the AI does not perform. I was looking at federated learning as a way to break through this challenge. This book gives deeper insights not only into the technological frameworks of FL but also into the potential businesses that FL can contribute to. For example, healthcare applications such as drug discovery are a well-known area where FL is used, but real industry FL projects such as robotics and autonomous vehicles cannot be found in any other books or web resources. The authors show solid applicability to those fields and encourage readers to look into new areas beyond established fields such as healthcare and finance. I highly recommend this book not only for AI engineers but also for business leaders who want deeper insight into where AI is headed.
Amazon Verified review
Steven Fernandes Feb 23, 2023
Rated 5 out of 5 stars
This comprehensive guide provides readers with the necessary building blocks to design and implement distributed systems that can be applied to real-world federated learning applications at scale.
Amazon Verified review
YYY-SSS Feb 15, 2024
Rated 5 out of 5 stars
"Federated Learning with Python" is praised for its practical, accessible approach to federated learning, making it an excellent resource for both researchers and practitioners. The book stands out for its clarity in explaining complex concepts and offering a hands-on guide to developing federated learning applications using Python. It goes beyond theory, providing real-life examples, code snippets, and detailed algorithm explanations. This makes it invaluable for those interested in applying federated learning in real-world projects, addressing challenges such as performance optimization and data heterogeneity. It's highlighted as a must-read for anyone looking to explore federated learning's potential, ensuring readers can develop innovative, privacy-preserving applications.
Amazon Verified review
Dror Dec 30, 2022
Rated 5 out of 5 stars
Federated Learning (FL) is an emerging, disruptive technology that radically changes the way enterprises in certain industries work with data and enable data privacy. If you're a data scientist or machine learning (ML) expert working on healthcare, finance, or IoT applications, this book is a must-read. For all other ML practitioners, I highly recommend this book as the best resource on the market to get acquainted with the emerging and increasingly important technology of FL and get practical advice on how it can be implemented in real-world applications. The book goes both broadly and deeply into the different aspects of FL. It provides in-depth coverage of the foundations of FL, the design and implementation of FL systems (both server- and client-side), and production-related aspects of FL. It also provides a nice overview of future trends in FL. In contrast to many other resources on FL, this book covers both the theoretical and practical aspects of FL. It begins with the necessary foundations and then guides the reader through building an application based on FL that can be deployed in either local or cloud environments. It will prove to be a highly useful resource for learning FL for any data scientist, ML engineer, or AI practitioner with a basic familiarity with ML and the Python programming language. Highly recommended!
Amazon Verified review
Colby Mainard Mar 23, 2024
Rated 5 out of 5 stars
The book also looks ahead at new ML architectures that aren't widely discussed.
Amazon Verified review

FAQs

How do I buy and download an eBook?

Where there is an eBook version of a title available, you can buy it from the book details for that title. Add either the standalone eBook or the eBook and print book bundle to your shopping cart. Your eBook will show in your cart as a product on its own. After completing checkout and payment in the normal way, you will receive your receipt on the screen containing a link to a personalised PDF download file. This link will remain active for 30 days. You can download backup copies of the file by logging in to your account at any time.

If you already have Adobe Reader installed, then clicking on the link will download and open the PDF file directly. If you don't, save the PDF file on your machine and download Adobe Reader to view it.

Please Note: Packt eBooks are non-returnable and non-refundable.

Packt eBook and Licensing: When you buy an eBook from Packt Publishing, completing your purchase means you accept the terms of our licence agreement. Please read the full text of the agreement. In it, we have tried to balance your need, as the reader, for the eBook to be usable with our need, as publishers, to protect our rights and those of our authors. In summary, the agreement says:

  • You may make copies of your eBook for your own use onto any machine
  • You may not pass copies of the eBook on to anyone else
How can I make a purchase on your website?

If you want to purchase a video course, eBook, or Bundle (Print+eBook), please follow the steps below:

  1. Register on our website using your email address and a password.
  2. Search for the title by name or ISBN using the search option.
  3. Select the title you want to purchase.
  4. Choose the format you wish to purchase the title in; if you order the Print Book, you get a free eBook copy of the same title. 
  5. Proceed with the checkout process (payment can be made using Credit Card, Debit Card, or PayPal).
Where can I access support around an eBook?
  • If you experience a problem with using or installing Adobe Reader, contact Adobe directly.
  • To view the errata for the book, see www.packtpub.com/support and view the pages for the title you have.
  • To view your account details or to download a new copy of the book, go to www.packtpub.com/account
  • To contact us directly if a problem is not resolved, use www.packtpub.com/contact-us
What eBook formats does Packt support?

Our eBooks are currently available in a variety of formats, such as PDF and ePub. In the future, this may well change with trends and developments in technology, but please note that our PDFs are not in the Adobe eBook Reader format, which has greater security restrictions.

You will need to use Adobe Reader v9 or later in order to read Packt's PDF eBooks.

What are the benefits of eBooks?
  • You can get the information you need immediately
  • You can easily take them with you on a laptop
  • You can download them an unlimited number of times
  • You can print them out
  • They are copy-paste enabled
  • They are searchable
  • There is no password protection
  • They are lower priced than print
  • They save resources and space
What is an eBook?

Packt eBooks are a complete electronic version of the print edition, available in PDF and ePub formats. Every piece of content down to the page numbering is the same. Because we save the costs of printing and shipping the book to you, we are able to offer eBooks at a lower cost than print editions.

When you have purchased an eBook, simply log in to your account and click on the link in Your Download Area. We recommend saving the file to your hard drive before opening it.

For optimal viewing of our eBooks, we recommend you download and install the free Adobe Reader version 9.