Data Engineering with AWS Cookbook

You're reading from Data Engineering with AWS Cookbook, a recipe-based approach to help you tackle data engineering problems with AWS services.

Product type: Paperback
Published: Nov 2024
Publisher: Packt
ISBN-13: 9781805127284
Length: 528 pages
Edition: 1st Edition
Authors (4): Viquar Khan, Gonzalo Herreros González, Huda Nofal, Trâm Ngọc Phạm
Table of Contents (16)

Preface
Chapter 1: Managing Data Lake Storage
Chapter 2: Sharing Your Data Across Environments and Accounts
Chapter 3: Ingesting and Transforming Your Data with AWS Glue
Chapter 4: A Deep Dive into AWS Orchestration Frameworks
Chapter 5: Running Big Data Workloads with Amazon EMR
Chapter 6: Governing Your Platform
Chapter 7: Data Quality Management
Chapter 8: DevOps – Defining IaC and Building CI/CD Pipelines
Chapter 9: Monitoring Data Lake Cloud Infrastructure
Chapter 10: Building a Serving Layer with AWS Analytics Services
Chapter 11: Migrating to AWS – Steps, Strategies, and Best Practices for Modernizing Your Analytics and Big Data Workloads
Chapter 12: Harnessing the Power of AWS for Seamless Data Warehouse Migration
Chapter 13: Strategizing Hadoop Migrations – Cost, Data, and Workflow Modernization with AWS
Index
Other Books You May Enjoy

Handling security-sensitive data (PII and PHI)

Data tokenization is a crucial aspect of data security and privacy in Amazon Redshift, protecting the confidentiality of critical data. Tokenization substitutes sensitive information with unique identifiers, or tokens, that preserve the format and length of the original data without revealing it. Tokenization mitigates the risk of data breaches, helps you comply with regulatory requirements, and maintains customer trust by ensuring that personal information stays secure. Leveraging AWS Lambda user-defined functions (UDFs) for data tokenization in Redshift provides a scalable and efficient solution for protecting your data.
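To make the idea concrete, the following is a minimal sketch of a deterministic, format-preserving tokenizer in Python. Everything in it is illustrative rather than the book's code: the key would normally come from AWS Secrets Manager or AWS KMS instead of being hard-coded, and because an HMAC cannot be reversed, this variant suits masking data for analytics; a system that must detokenize would use a token vault or a format-preserving encryption library instead.

```python
# A minimal sketch of deterministic, format-preserving tokenization.
# The key below is a placeholder; fetch it from a secrets store in practice.
import hashlib
import hmac

SECRET_KEY = b"replace-with-a-key-from-secrets-manager"  # hypothetical key source

def tokenize(value: str) -> str:
    """Replace each character with one of the same class (digit or letter),
    derived from an HMAC of the input, so format and length are preserved."""
    digest = hmac.new(SECRET_KEY, value.encode(), hashlib.sha256).digest()
    out = []
    for i, ch in enumerate(value):
        b = digest[i % len(digest)]
        if ch.isdigit():
            out.append(str(b % 10))
        elif ch.isalpha():
            base = "A" if ch.isupper() else "a"
            out.append(chr(ord(base) + b % 26))
        else:
            out.append(ch)  # keep separators such as '-' intact
    return "".join(out)

# Same shape as the input: digits stay digits and '-' is preserved,
# but the real SSN is never exposed.
print(tokenize("123-45-6789"))
```

Because the HMAC is keyed and deterministic, the same input always yields the same token, so joins and GROUP BY operations on the tokenized column still work across tables.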

In this recipe, we will learn how to create a Lambda UDF to use in Redshift for tokenization.
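As a preview, the sketch below shows the general shape such a Lambda UDF can take; it is a hypothetical example, not the recipe's exact code. Redshift invokes the Lambda function with a JSON event whose arguments field holds one list of argument values per row, and it expects a JSON response containing success and a results array of the same length and order. The tokenizer here is a compact stand-in; the format-preserving helper from the previous sketch could be packaged in its place. The function name tokenize_pii, the Lambda name pii-tokenizer, and the role ARN are all placeholders.

```python
import hashlib
import hmac
import json

SECRET_KEY = b"replace-with-a-key-from-secrets-manager"  # hypothetical key source

def tokenize(value: str) -> str:
    # Stand-in tokenizer; swap in the format-preserving helper sketched above.
    return hmac.new(SECRET_KEY, value.encode(), hashlib.sha256).hexdigest()[:16]

def lambda_handler(event, context):
    try:
        # Redshift batches rows: event["arguments"] looks like
        # [["123-45-6789"], ["987-65-4321"], ...] with one inner list per row.
        rows = event["arguments"]
        results = [tokenize(row[0]) for row in rows]
        return json.dumps({"success": True, "results": results})
    except Exception as exc:
        # Returning success=False surfaces the error message in the Redshift query.
        return json.dumps({"success": False, "error_msg": str(exc)})

# Once deployed, the UDF is registered in Redshift along these lines
# (function name, Lambda name, and role ARN are placeholders):
#
#   CREATE EXTERNAL FUNCTION tokenize_pii(varchar)
#   RETURNS varchar
#   STABLE
#   LAMBDA 'pii-tokenizer'
#   IAM_ROLE 'arn:aws:iam::<account-id>:role/redshift-lambda-role';
#
# and can then be called directly in SQL, for example:
#   SELECT tokenize_pii(ssn) FROM customers;
```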

Getting ready

To complete this recipe, you must have a Redshift cluster with a table whose data you need to tokenize. You also need a Cloud9 environment...
