Azure Synapse Analytics Cookbook

Implement a limitless analytical platform using effective recipes for Azure Synapse

Product type: Paperback
Published: April 2022
Publisher: Packt
ISBN-13: 9781803231501
Length: 238 pages
Edition: 1st

Authors (2): Gaurav Agarwal, Meenakshi Muralidharan
Table of Contents (11 chapters)

Preface
Chapter 1: Choosing the Optimal Method for Loading Data to Synapse
Chapter 2: Creating Robust Data Pipelines and Data Transformation
Chapter 3: Processing Data Optimally across Multiple Nodes
Chapter 4: Engineering Real-Time Analytics with Azure Synapse Link Using Cosmos DB
Chapter 5: Data Transformation and Processing with Synapse Notebooks
Chapter 6: Enriching Data Using the Azure ML AutoML Regression Model
Chapter 7: Visualizing and Reporting Petabytes of Data
Chapter 8: Data Cataloging and Governance
Chapter 9: MPP Platform Migration to Synapse
Other Books You May Enjoy

Performing read-write operations to a Parquet file using Spark in Synapse

Apache Parquet is a columnar file format supported by many big data processing systems and is one of the most efficient formats for storing analytical data. It is used widely across the Hadoop and big data ecosystem. Its main advantage is efficient data compression and encoding, which improves performance when working with complex data at scale.

Spark supports both reading and writing Parquet files. Because Parquet's columnar compression shrinks the data on underlying storage, it also reduces I/O operations and memory consumption.

In this section, we will learn how to read from and write to Parquet files. Doing this with PySpark code is straightforward.
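To make this concrete, here is a minimal PySpark sketch of writing a DataFrame to Parquet and reading it back. The ADLS Gen2 path (the storage account and container names in it) is a placeholder, not a detail from the book; substitute the primary storage of your own Synapse workspace.

```python
from pyspark.sql import SparkSession

# In a Synapse notebook a SparkSession named `spark` already exists;
# this builder call only matters when running the snippet standalone.
spark = SparkSession.builder.appName("parquet-demo").getOrCreate()

# A small sample DataFrame to persist.
df = spark.createDataFrame(
    [(1, "trip-a", 2.5), (2, "trip-b", 7.1)],
    ["trip_id", "name", "distance"],
)

# Placeholder ADLS Gen2 path -- replace with your own account/container.
path = "abfss://data@yourstorageaccount.dfs.core.windows.net/demo/trips.parquet"

# Write the DataFrame as Parquet (Spark compresses with snappy by default).
df.write.mode("overwrite").parquet(path)

# Read the files back and inspect the contents.
df_back = spark.read.parquet(path)
df_back.show()
```

Note that `parquet(path)` writes a directory of part files rather than a single file; Spark parallelizes the write across executors, which is part of why the format scales well across nodes.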

Getting ready

We will be using a public dataset for our scenario: New York yellow taxi trip data, which includes attributes such as trip distances, itemized fares, rate types, payment types, pick...
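As a hedged sketch of loading this dataset: the NYC yellow taxi trips are published through Azure Open Datasets in Parquet form. The public storage path and the column names below are assumptions to verify against the current Azure Open Datasets documentation, not details taken from the book; the snippet relies on the `spark` session that Synapse notebooks provide.

```python
# NYC yellow taxi trips published via Azure Open Datasets (Parquet).
# The public path and column names are assumptions -- verify them
# against the Azure Open Datasets documentation before relying on them.
base_path = (
    "wasbs://nyctlc@azureopendatastorage.blob.core.windows.net/"
    "yellow/puYear=2019/puMonth=6/*.parquet"
)

taxi_df = spark.read.parquet(base_path)
taxi_df.select("tpepPickupDateTime", "tripDistance", "fareAmount").show(5)
```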
