Data Engineering with Google Cloud Platform (2nd Edition)
A guide to leveling up as a data engineer by building a scalable data platform with Google Cloud

Author: Adi Wijaya
Publisher: Packt | Published: Apr 2024 | ISBN-13: 9781835080115 | 476 pages | Paperback

Understanding the concept of an ephemeral cluster

After running the previous exercises, you may have noticed that Spark is very useful for processing data, yet it has little to no dependence on HDFS. It is far more convenient to use data as is from GCS or BigQuery than to first load it into HDFS.
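To make this concrete, here is a minimal PySpark sketch of reading data directly from GCS and from BigQuery. The bucket, project, dataset, and table names are hypothetical, and the BigQuery read assumes the spark-bigquery connector is available on the cluster:

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("gcs-bq-example").getOrCreate()

    # Read a CSV file straight from a GCS bucket (hypothetical path);
    # Dataproc ships with the GCS connector, so gs:// paths work out of the box.
    sales_df = spark.read.csv("gs://my-bucket/data/sales.csv", header=True)

    # Read a BigQuery table through the spark-bigquery connector
    # (the table name is a placeholder).
    bq_df = (
        spark.read.format("bigquery")
        .option("table", "my-project.my_dataset.my_table")
        .load()
    )

In neither case does any data need to live in HDFS first.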

What does this mean? It means that we may choose not to store any data in the Hadoop cluster (more specifically, in HDFS) and use the cluster only to run jobs. For cost efficiency, we can be smart about it and turn the cluster on only while a job is running, then turn it off again.

Furthermore, we can destroy the entire Hadoop cluster when the job is finished and create a new one when we submit a new job. This concept is what’s called an ephemeral cluster.
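As a rough sketch of that lifecycle, here is one way to drive it with the google-cloud-dataproc Python client; the project, region, machine types, and job path below are placeholder assumptions, not values from the exercises:

    from google.cloud import dataproc_v1

    project = "my-project"   # placeholder
    region = "us-central1"   # placeholder
    endpoint = {"api_endpoint": f"{region}-dataproc.googleapis.com:443"}

    cluster_client = dataproc_v1.ClusterControllerClient(client_options=endpoint)
    job_client = dataproc_v1.JobControllerClient(client_options=endpoint)

    # 1. Create the cluster only when a job needs to run.
    cluster = {
        "project_id": project,
        "cluster_name": "ephemeral-cluster",
        "config": {
            "master_config": {"num_instances": 1, "machine_type_uri": "n1-standard-2"},
            "worker_config": {"num_instances": 2, "machine_type_uri": "n1-standard-2"},
        },
    }
    cluster_client.create_cluster(
        request={"project_id": project, "region": region, "cluster": cluster}
    ).result()

    # 2. Submit the PySpark job and wait for it to finish.
    job = {
        "placement": {"cluster_name": "ephemeral-cluster"},
        "pyspark_job": {"main_python_file_uri": "gs://my-bucket/jobs/etl.py"},
    }
    job_client.submit_job_as_operation(
        request={"project_id": project, "region": region, "job": job}
    ).result()

    # 3. Destroy the cluster as soon as the job is done, so we stop paying for it.
    cluster_client.delete_cluster(
        request={"project_id": project, "region": region,
                 "cluster_name": "ephemeral-cluster"}
    ).result()

The same create-submit-delete cycle can also be driven from the gcloud CLI, and Dataproc workflow templates can automate it end to end by provisioning a managed cluster and deleting it when the workflow completes.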

An ephemeral cluster means the cluster is not permanent: it exists only while it is running jobs. There are two main advantages to using this approach:

  • Highly efficient infrastructure cost: With this approach, you don’t need to have a...