Building ETL Pipelines with Python

Create and deploy enterprise-ready ETL pipelines by employing modern methods

Product type: Paperback
Published: Sep 2023
Publisher: Packt
ISBN-13: 9781804615256
Length: 246 pages
Edition: 1st Edition
Authors (2):
Brij Kishore Pandey
Emily Ro Schoof

Table of Contents (22)

Preface
Part 1: Introduction to ETL, Data Pipelines, and Design Principles
  Chapter 1: A Primer on Python and the Development Environment
  Chapter 2: Understanding the ETL Process and Data Pipelines
  Chapter 3: Design Principles for Creating Scalable and Resilient Pipelines
Part 2: Designing ETL Pipelines with Python
  Chapter 4: Sourcing Insightful Data and Data Extraction Strategies
  Chapter 5: Data Cleansing and Transformation
  Chapter 6: Loading Transformed Data
  Chapter 7: Tutorial – Building an End-to-End ETL Pipeline in Python
  Chapter 8: Powerful ETL Libraries and Tools in Python
Part 3: Creating ETL Pipelines in AWS
  Chapter 9: A Primer on AWS Tools for ETL Processes
  Chapter 10: Tutorial – Creating an ETL Pipeline in AWS
  Chapter 11: Building Robust Deployment Pipelines in AWS
Part 4: Automating and Scaling ETL Pipelines
  Chapter 12: Orchestration and Scaling in ETL Pipelines
  Chapter 13: Testing Strategies for ETL Pipelines
  Chapter 14: Best Practices for ETL Pipelines
  Chapter 15: Use Cases and Further Reading
Index
Other Books You May Enjoy

Preface

We’re living in an era where the volume of generated data is rapidly outgrowing its practicality in its unprocessed state. To gain valuable insights from this data, it needs to be transformed into digestible pieces of information. There is no shortage of quick and easy ways to accomplish this: numerous licensed tools on the market offer “plug-and-play” data ingestion environments. However, the data requirements of industry-level projects often exceed the capabilities of these existing tools and technologies. As data volumes grow, the processing capacity required, and the cost of that processing, grow exponentially, which can make traditional approaches prohibitively expensive for industry-level projects.

This growing demand for highly customizable data processing at a reasonable price point goes hand in hand with a growing demand for skilled data engineers. Data engineers handle the extraction, transformation, and loading of data, commonly referred to as the Extract, Transform, and Load (ETL) process. ETL workflows, also known as ETL pipelines, let data engineers build customized, strategic solutions and flexible deployment environments that can scale up or down with the data requirement fluctuations that occur between pipeline runs.
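
To make the three stages concrete, here is a minimal, hypothetical sketch of a single ETL run in plain Python. The file names, field names, and cleaning rules are illustrative placeholders only, not an example taken from this book.

import csv
import json

def extract(path):
    """Extract: read raw records from a CSV source file."""
    with open(path, newline="") as f:
        return list(csv.DictReader(f))

def transform(records):
    """Transform: drop incomplete rows and normalize field types."""
    cleaned = []
    for row in records:
        if not row.get("customer_id"):
            continue  # skip rows missing a required key
        cleaned.append({
            "customer_id": row["customer_id"].strip(),
            "amount": float(row.get("amount") or 0),
        })
    return cleaned

def load(records, path):
    """Load: write the transformed records to a JSON target."""
    with open(path, "w") as f:
        json.dump(records, f, indent=2)

if __name__ == "__main__":
    # One pipeline run: extract -> transform -> load
    load(transform(extract("sales_raw.csv")), "sales_clean.json")

Real pipelines replace each stage with production-grade components (databases, APIs, validation, orchestration), but the extract, transform, and load boundaries stay the same.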

SQL, Python, R, and Spark are some of the most popular languages used to develop custom data solutions. Python, in particular, has emerged as a frontrunner, mainly because of its adaptability and user-friendliness, which make collaboration easier for developers. In simpler terms, think of Python as the “universal tool” of the data world: it’s flexible, and people love working with it.

Building ETL Pipelines with Python introduces the fundamentals of data pipelines using open source tools and technologies in Python. It provides a comprehensive guide to creating robust, scalable ETL pipelines, broken down into clear and repeatable steps. Our goal for this book is to provide readers with a resource that combines knowledge with practical application and encourages the pursuit of a career in data.

As you read, you will explore the diverse tools and technologies Python provides for creating customized data pipelines. By the time you finish, you will have first-hand experience developing robust, scalable, and resilient pipelines in Python, pipelines that can transition into a production environment, often without needing further adjustments.

We are excited to embark on this learning journey with you, sharing insights and expertise that can empower you to transform the way you approach data pipeline development. Let’s get to it!
