The steps involved in data wrangling

Similar to crude oil, raw data has to go through multiple data wrangling steps to become meaningful. In this section, we are going to learn the six-step process involved in data wrangling:

  1. Data discovery
  2. Data structuring
  3. Data cleaning
  4. Data enrichment
  5. Data validation
  6. Data publishing

Before we begin, it's important to understand that these steps don't always need to be followed sequentially; in some cases, you may skip some of them altogether.

Also, keep in mind that these steps are iterative and differ for different personas, such as data analysts, data scientists, and data engineers.

For example, what data discovery means for a data engineer may differ from what it means for a data analyst or data scientist:

Figure 1.5: The steps of the data-wrangling process

Let’s start learning about these steps in detail.

Data discovery

The first step of the data wrangling process, and one of the most important, is data discovery. In data discovery, we familiarize ourselves with the raw data: what kind of data we have, which use case we are looking to solve with it, what relationships exist within it, what format it is in (such as CSV or Parquet), which tools are available for storing, transforming, and querying it, and how we wish to organize it (for example, by folder structure, file size, and partitions) to make it easy to access.
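
In practice, data discovery often starts with profiling a small sample of the raw data. The following is a minimal sketch using pandas; the file name and columns are hypothetical and only illustrate the idea:

# A minimal data discovery sketch; the file and columns are hypothetical
import pandas as pd

# Load a small sample of the raw data to get a first impression
sample = pd.read_csv("raw_sales_data.csv", nrows=10_000)
print(sample.shape)         # number of rows and columns in the sample
print(sample.dtypes)        # column names and inferred data types
print(sample.head())        # a few example records
print(sample.isna().sum())  # missing values per column
print(sample.describe())    # basic statistics for numeric columns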

Let’s understand this by looking at an example.

In this example, we will try to understand how data discovery varies based on the persona. Let’s assume we have two colleagues, James and Jean. James is a data engineer while Jean is a data analyst, and they both work for a car-selling company.

Jean is new to the organization, and she is required to analyze car sales numbers for Southern California. She has reached out to James and asked him for data from the sales table in the production system.

Here is the data discovery process for Jean (a data analyst):

  1. Jean has to identify the data she needs to generate the sales report (for example, sales transaction data, vehicle details data, customer data, and so on).
  2. Jean has to find where the sales data resides (a database, file share, CRM, and so on).
  3. Jean has to identify how much data she needs (from the last 12 months, the last month, and so on).
  4. Jean has to identify what kind of tool she is going to use (Amazon QuickSight, Power BI, and so on).
  5. Jean has to identify the format she needs the data to be in so that it works with the tools she has.
  6. Jean has to identify where she is looking to store this data – in a data lake (Amazon S3), on her desktop, a file share, a sandbox environment, and so on.

Here is the data discovery process for James (a data engineer):

  1. Which system has the requested data? For example, Amazon RDS, Salesforce CRM, a production SFTP location, and so on.
  2. How will the data be extracted? For example, using services such as AWS DMS or AWS Glue, or by writing a script.
  3. What will the schedule look like? Daily, weekly, or monthly?
  4. What will the file format look like? For example, CSV, Parquet, ORC, and so on.
  5. How will the data be stored in the target data store?
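
As a rough illustration, once the source system and extraction method are known, James might survey what already exists in the raw landing area of the data lake to understand the formats and sizes involved. The following minimal sketch uses boto3; the bucket and prefix names are hypothetical:

# Survey a raw S3 prefix to see which file formats and sizes are present
# (the bucket and prefix names are hypothetical)
import boto3

s3 = boto3.client("s3")
paginator = s3.get_paginator("list_objects_v2")
formats = {}
for page in paginator.paginate(Bucket="my-raw-data-bucket", Prefix="sales/raw/"):
    for obj in page.get("Contents", []):
        # Group objects by file extension to see which formats are present
        extension = obj["Key"].rsplit(".", 1)[-1].lower()
        formats.setdefault(extension, []).append(obj["Size"])
for extension, sizes in formats.items():
    print(f"{extension}: {len(sizes)} files, {sum(sizes) / (1024 * 1024):.1f} MB")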

Data structuring

To support existing and future business use cases and serve its customers better, an organization must collect unprecedented amounts of data from different sources and in different varieties. In modern data architectures, this data is most often stored in data lakes, since a data lake can hold all kinds of data files (structured data, unstructured data, images, audio, video, and more) in different shapes and sizes in their raw form. Raw data lacks the definitive structure required for it to be stored in databases or data warehouses or used to build analytics or machine learning models, and at this point, it is not optimized for cost and performance.

In addition, when you work with streaming data, such as clickstreams and logs, not all the data fields (columns) are used for analytics.

At this stage of data wrangling, we try to optimize the raw dataset for cost and performance by partitioning it and converting file types (for example, CSV into Parquet).

Once again, let’s consider our friends James and Jean to understand this.

For Jean, the data analyst, data structuring means either querying the data directly or storing it in a BI tool's in-memory store (in the case of Amazon QuickSight, this is called SPICE), which provides faster access to the data.

For James, the data engineer, when he is extracting data from a production system and storing it in a data lake such as Amazon S3, he must consider what the file format should be. He can also partition the data by geographical region, such as county, state, or region, or by date – for example, year=YYYY, month=MM, and day=DD.
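
To make this concrete, here is a minimal sketch of data structuring with the AWS SDK for pandas (awswrangler): reading raw CSV files and writing them back as Parquet partitioned by date. The S3 paths are hypothetical, as is the assumption that year, month, and day columns already exist in the data:

# Convert raw CSV data into partitioned Parquet on S3 (paths are hypothetical)
import awswrangler as wr

# Read the raw CSV files from the landing area
df = wr.s3.read_csv("s3://my-data-lake/raw/sales/")
# Write them back as Parquet, partitioned by date parts; this produces
# year=YYYY/month=MM/day=DD style prefixes in the structured zone
# (assumes the DataFrame already has year, month, and day columns)
wr.s3.to_parquet(
    df=df,
    path="s3://my-data-lake/structured/sales/",
    dataset=True,
    partition_cols=["year", "month", "day"],
)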

Data cleaning

The next step of the data wrangling process is data cleaning. The previous two steps gave us an idea of what the data looks like and how it is stored. In the data cleaning step, we start working on the raw data to make it meaningful for future use cases.

In the data cleaning step, we try to make data meaningful by doing the following:

  • Removing unwanted columns and duplicate values, and filling in null values, to improve the data’s readiness
  • Performing data validation, such as identifying missing values in mandatory columns (First Name, Last Name, SSN, Phone No., and so on)
  • Validating or fixing data types for better optimization of storage and performance
  • Identifying and fixing outliers
  • Removing garbage data or unwanted values, such as special characters

Both James and Jean can perform similar data cleaning tasks; however, the scale might vary. James must perform these tasks on the entire dataset, while Jean may only have to perform them on the data from Southern California. The granularity might vary as well: for James, it may be limited to regions such as Southern California and Northern California, while for Jean, it might go down to the city level or even ZIP code.
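
A minimal pandas sketch of the cleaning tasks listed above might look as follows; the column names and thresholds are hypothetical:

# A minimal data cleaning sketch; column names and thresholds are hypothetical
import pandas as pd

df = pd.read_csv("raw_sales_data.csv")
# Remove unwanted columns and duplicate records
df = df.drop(columns=["internal_notes"]).drop_duplicates()
# Drop records missing mandatory fields such as names or phone numbers
df = df.dropna(subset=["first_name", "last_name", "phone_no"])
# Fill remaining nulls in optional columns with a default value
df["discount"] = df["discount"].fillna(0)
# Fix data types for better storage and performance
df["sale_date"] = pd.to_datetime(df["sale_date"])
df["zip_code"] = df["zip_code"].astype("string")
# Remove simple outliers, such as negative or implausibly large prices
df = df[(df["sale_price"] > 0) & (df["sale_price"] < 500_000)]
# Strip garbage characters from free-text fields
df["first_name"] = df["first_name"].str.replace(r"[^A-Za-z \-']", "", regex=True)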

Data enrichment

Up until the data cleaning step, we were primarily working on a single data source and making it meaningful for future use. In the real world, however, data is most often fragmented and stored in multiple disparate data stores, and to support use cases such as personalization or recommendation solutions, Customer 360, or log forensics, we need to join data from those different stores.

For example, to build a Customer 360 solution, you need data from Customer Relationship Management (CRM) systems, clickstream logs, relational databases, and so on.

So, in the data enrichment step, we build the processes that enhance the raw data with relevant data obtained from different sources.
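
For example, a minimal sketch of enriching cleaned sales data with customer attributes from a CRM export might look like this; the file names and join key are hypothetical:

# Enrich cleaned sales data with CRM customer attributes (names are hypothetical)
import pandas as pd

sales = pd.read_parquet("cleaned_sales.parquet")
crm_customers = pd.read_csv("crm_customers.csv")
# A left join keeps every sale and adds customer attributes where available
enriched = sales.merge(
    crm_customers[["customer_id", "segment", "email_opt_in"]],
    on="customer_id",
    how="left",
)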

Data validation

There is a very interesting term in computer science called garbage in, garbage out (GIGO). GIGO is the concept that flawed or defective (garbage) input data produces defective output.

In other words, the quality of the output is determined by the quality of the input. So, if we provide bad data as input, we will get inaccurate results.

In the data validation step, we address this issue by performing various data quality checks:

  • Validating data accuracy against business rules
  • Validating data security
  • Validating result consistency across the entire dataset
  • Validating data quality by running checks such as the following:
    • Number of records
    • Duplicate values
    • Missing values
    • Outliers
    • Distinct values
    • Unique values
    • Correlation

There is a lot of overlap between data cleaning and data validation, and the two processes are indeed similar. However, data validation is done on the resulting dataset, while data cleaning is primarily done on the raw dataset.
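
A minimal sketch of such data quality checks in pandas might look as follows; the column names and expectations are hypothetical:

# A minimal data validation sketch; columns and expectations are hypothetical
import pandas as pd

df = pd.read_parquet("enriched_sales.parquet")
checks = {
    "record_count": len(df),
    "duplicate_rows": int(df.duplicated().sum()),
    "missing_customer_ids": int(df["customer_id"].isna().sum()),
    "distinct_regions": int(df["region"].nunique()),
    "price_outliers": int((df["sale_price"] > 500_000).sum()),
}
print(checks)
# Correlation between numeric columns can reveal suspicious relationships
print(df[["sale_price", "discount"]].corr())
# Fail the pipeline if a basic expectation is violated
assert checks["record_count"] > 0, "No records in the output dataset"
assert checks["duplicate_rows"] == 0, "Duplicate records found"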

Data publishing

After all the data wrangling steps are complete, the data is ready to be used for analytics and to solve business problems.

So, the final step is to publish the data to the end user with the required access and permission.

In this step, we primarily concentrate on how the data is being exposed to the end user and where the final data gets stored – that is, in a relational database, a data warehouse, curated or user zones in a data lake, or through the Secure File Transfer Protocol (SFTP).

The choice of data storage depends on the tool through which the end user is looking to access the data. For example, if the end user is looking to access data through BI tools such as Amazon QuickSight, Power BI, Informatica, and so on, a relational data store will be an ideal choice. If it is accessed by a data scientist, ideally, it should be stored in an object store.
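
As an illustration, here is a minimal publishing sketch with the AWS SDK for pandas (awswrangler): writing the final dataset to a curated zone in the data lake and registering it in the AWS Glue Data Catalog so that it can be queried through Athena or a BI tool. The bucket, database, and table names are hypothetical:

# Publish the wrangled dataset to the curated zone and the Glue Data Catalog
# (the S3 path, database, and table names are hypothetical)
import awswrangler as wr
import pandas as pd

final_df = pd.read_parquet("validated_sales.parquet")
wr.s3.to_parquet(
    df=final_df,
    path="s3://my-data-lake/curated/sales/",
    dataset=True,
    database="sales_analytics",  # the Glue database is assumed to exist
    table="socal_sales",
    partition_cols=["year", "month"],  # assumes year and month columns exist
)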

We will learn about the different kinds of data stores we can use to store raw and wrangled data later in this book.

In this section, we learned about the various steps of the data wrangling process through our friends James and Jean, and how these steps can vary based on persona. Now, let's understand the best practices for data wrangling.
