Data Observability for Data Engineering
Proactive strategies for ensuring data accuracy and addressing broken data pipelines

Authors: Michele Pinto, Sammy El Khammal
Product type: Paperback
Published: December 2023
Publisher: Packt
ISBN-13: 9781804616024
Length: 228 pages
Edition: 1st
Table of Contents

Preface
Part 1: Introduction to Data Observability
Chapter 1: Fundamentals of Data Quality Monitoring
Chapter 2: Fundamentals of Data Observability
Part 2: Implementing Data Observability
Chapter 3: Data Observability Techniques
Chapter 4: Data Observability Elements
Chapter 5: Defining Rules on Indicators
Part 3: How to Adopt Data Observability in Your Organization
Chapter 6: Root Cause Analysis
Chapter 7: Optimizing Data Pipelines
Chapter 8: Organizing Data Teams and Measuring the Success of Data Observability
Part 4: Appendix
Chapter 9: Data Observability Checklist
Chapter 10: Pathway to Data Observability
Index
Other Books You May Enjoy

Exploring the seven dimensions of data quality

Now that you understand why data quality is important, let’s learn how to measure the quality of datasets.

A lot has already been written about data quality in the literature – for instance, the DAMA-DMBOK (Data Management Body of Knowledge). In this chapter, we have decided to focus on seven dimensions: accuracy, completeness, consistency, conformity, integrity, timeliness, and uniqueness:

Figure 1.4 – Dimensions of data quality

In the following sections, we will cover these dimensions in detail and explain the potential business impacts of poor-quality data in those dimensions.

Accuracy

The accuracy of data defines its ability to reflect the real-world facts it represents. In a CRM, the contract type of a customer should be correctly associated with the right customer ID; otherwise, marketing actions could be wrongly targeted. For instance, in Figure 1.5, where the first dataset was incorrectly copied into the second one, you can see that the email addresses were mixed up:

Figure 1.5 – Example of inaccurate data
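
To illustrate how such an accuracy issue could be caught, here is a minimal sketch in pandas. It assumes the original CRM table and its copy are both available as DataFrames with customer_id and email columns; the column names follow the example, but the data and variable names are hypothetical. Joining on the key exposes values that no longer match the source:

import pandas as pd

# Hypothetical source table and its (badly) copied version.
source = pd.DataFrame({
    "customer_id": [1, 2, 3],
    "email": ["ana@example.com", "bob@example.com", "carl@example.com"],
})
copy = pd.DataFrame({
    "customer_id": [1, 2, 3],
    "email": ["bob@example.com", "ana@example.com", "carl@example.com"],  # mixed up
})

# Join on the key and keep the rows whose copied value differs from the source.
merged = source.merge(copy, on="customer_id", suffixes=("_source", "_copy"))
mismatches = merged[merged["email_source"] != merged["email_copy"]]
print(mismatches)

Any rows returned by this check point to records that no longer reflect the source and need to be reloaded or corrected.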

Completeness

Data is considered complete if it contains all the data needed by its consumers. It is about getting the right data for the right process. A dataset can be incomplete if it does not contain an expected column or if values are missing. However, if the missing column is not used by your process, you can consider the dataset complete for your needs. In Figure 1.6, the page_visited column is missing entirely, while other columns have missing values. This is very annoying for the marketing team in charge of sending emails, as they cannot contact all their customers:

Figure 1.6 – Example of incomplete data

The preceding case is a clear example of how data producers’ incentives can differ from consumers’. Perhaps the producer left the cells empty to increase sales conversions, since requiring an email address creates friction. However, for a consumer using the data for an email campaign, this field is crucial.
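
A simple completeness check can confirm that every column the consumer relies on is present and report the share of missing values per column. The following sketch is illustrative: the expected columns follow Figure 1.6, but the DataFrame itself is an assumption:

import pandas as pd

# Hypothetical extract: the page_visited column is absent and some emails are missing.
df = pd.DataFrame({
    "customer_id": [1, 2, 3, 4],
    "email": ["ana@example.com", None, "carl@example.com", None],
})

expected_columns = {"customer_id", "email", "page_visited"}

missing_columns = expected_columns - set(df.columns)
null_rates = df.isna().mean()

print("Missing columns:", missing_columns)
print("Share of missing values per column:")
print(null_rates)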

Consistency

The way data is represented should be consistent across the system. If you record the addresses of customers and need the ZIP code for your model, the records have to be consistent, meaning you should not record the city for some customers and the ZIP code for others. At a technical level, the representation of data can be inconsistent too. Look at Figure 1.7 – the has_confirmed column records Booleans as numbers for the first few rows and then as strings.

In this example, we can suppose the data source is a file, where field types can easily change. In a relational database management system (RDBMS), this issue can be avoided because the column’s data type cannot be changed:

Figure 1.7 – Example of inconsistent data
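
One way to spot this kind of inconsistency, assuming the file has been read into a pandas DataFrame with a has_confirmed column as in Figure 1.7 (the values here are made up), is to count the Python types actually stored in the column:

import pandas as pd

# Booleans recorded as numbers for the first rows and as strings afterwards.
df = pd.DataFrame({"has_confirmed": [0, 1, 1, "true", "false"]})

# A loosely typed file ends up as an 'object' column; counting the distinct
# Python types per value reveals the mixed representation.
types_used = df["has_confirmed"].map(type).value_counts()
print(types_used)

print("Consistent representation:", len(types_used) == 1)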

Conformity

Data should be collected in the right format. For instance, page_visited may only contain integer numbers, order_id should be a string of characters, and, in another dataset, a ZIP code may be a combination of letters and numbers. In Figure 1.8, you would expect to see an @ in the email address, but you can only see a username:

Figure 1.8 – Example of improper data
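
Conformity checks usually validate values against an expected format. As a sketch, the following flags email values that do not match a basic address pattern, such as the username-only values in Figure 1.8 (the DataFrame and the pattern are illustrative):

import pandas as pd

df = pd.DataFrame({
    "customer_id": [1, 2, 3],
    "email": ["ana@example.com", "bob", "carl@example.com"],
})

# A deliberately simple pattern: something@something.something
email_pattern = r"^[^@\s]+@[^@\s]+\.[^@\s]+$"

non_conforming = df[~df["email"].str.match(email_pattern, na=False)]
print(non_conforming)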

Integrity

Data transformations should ensure that data items keep their relationships. Integrity means that data is connected correctly and that you don’t have standalone data – for instance, an address not connected to any customer name. Integrity ensures the data in a dataset can be traced and connected to other data.

In Figure 1.9, an engineer has extracted the duration column, which is useless without order_id:

Figure 1.9 – Example of an integrity issue
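
A basic integrity check verifies that every extracted record can still be traced back to its parent record. The sketch below assumes an orders table and an extracted durations table that should share the order_id key; both tables are hypothetical:

import pandas as pd

orders = pd.DataFrame({
    "order_id": ["A1", "A2", "A3"],
    "customer_id": [1, 2, 3],
})

# The extracted table should keep order_id so each duration stays traceable.
durations = pd.DataFrame({
    "order_id": ["A1", "A2", "B9"],  # "B9" has no matching order
    "duration": [12.5, 7.0, 3.2],
})

# Rows whose key has no match in the reference table break referential integrity.
orphans = durations[~durations["order_id"].isin(orders["order_id"])]
print(orphans)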

Timeliness

Time is also an important dimension of data quality. If you run a weekly report, you want to be sure that the data is up to date and that the process runs at the correct time. For instance, if a weekly sales report is created, you expect it to cover last week’s sales. If you receive outdated data because the database was not updated with the new week’s data, the total in this week’s report may be the same as last week’s, which will lead to wrong conclusions.

Time-sensitive data, if not delivered on time, can lead to inaccurate insights, misinformed decisions, and, ultimately, monetary losses.
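
Timeliness is often monitored as freshness: how old the most recent record is compared to what the process expects. Here is a minimal sketch, assuming a sales table with an order_date column and a seven-day freshness requirement (both the column name and the threshold are assumptions for illustration):

import pandas as pd

sales = pd.DataFrame({
    "order_id": ["A1", "A2", "A3"],
    "order_date": pd.to_datetime(["2023-11-27", "2023-11-28", "2023-11-29"]),
})

# The weekly report expects data that is at most seven days old.
max_age = pd.Timedelta(days=7)
age = pd.Timestamp.now() - sales["order_date"].max()

if age > max_age:
    print(f"Data is stale: the latest record is {age.days} days old")
else:
    print("Data is fresh enough for the weekly report")

The threshold should reflect the consumer’s expectation for the report, not the producer’s delivery schedule.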

Uniqueness

The uniqueness dimension is used to ensure there is no ambiguous data. A data record should not be duplicated and should not overlap with other records. In Figure 1.10, you can see that the same order ID was used for two distinct orders and that an order was recorded twice, assuming no primary key is defined in the dataset. This kind of discrepancy can lead to various issues, such as incorrect order tracking, inaccurate inventory management, and, potentially, negative customer experiences:

Figure 1.10 – Example of duplicated data
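
Without a primary key enforced by the storage layer, uniqueness can still be checked downstream. The sketch below, on a hypothetical orders table, detects both symptoms shown in Figure 1.10: one order_id shared by two distinct orders, and one order recorded twice:

import pandas as pd

orders = pd.DataFrame({
    "order_id": ["A1", "A1", "A2", "A3", "A3"],
    "amount":   [10.0, 25.0, 14.0, 9.0, 9.0],
})

# Rows that share an order_id (the key is reused for distinct orders).
duplicated_keys = orders[orders.duplicated(subset="order_id", keep=False)]
print("Rows sharing an order_id:")
print(duplicated_keys)

# Rows that are identical in full (the same order recorded twice).
repeated_rows = orders[orders.duplicated(keep=False)]
print("Rows recorded more than once:")
print(repeated_rows)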

Consequences of data quality issues

Data quality issues can have disastrous effects. When the quality standard is not met, the consumer can face various consequences.

Sometimes, a report won’t be created because the data quality issue causes a pipeline failure, leading to delays in the decision-making process. Other times, the issue can be more subtle. For instance, because of a completeness issue, the marketing team could send emails that read "Hello {firstname}", damaging their professional image with customers. The result can hurt the company’s profits or reputation.

Nevertheless, it is important to note that not all data quality issues lead to a catastrophic outcome. Indeed, an issue only matters if the affected data item is part of your pipeline. The consumer won’t experience issues with data they don’t need. However, this means that, to ensure the quality of the pipeline, the producer needs to know what is done with the data and what the data consumer considers important. A data source that is important for your project may have no relevance for another project.

This is why, in the first stages of a project, a new approach must be adopted: the producer and the consumer have to come together to define the quality expectations of the pipeline. In the next section, we will see how this can be implemented with service-level agreements (SLAs) and service-level objectives (SLOs).
