Getting Started with DuckDB

You're reading from Getting Started with DuckDB: A practical guide for accelerating your data science, data analytics, and data engineering workflows

Product type: Paperback
Published in: Jun 2024
Publisher: Packt
ISBN-13: 9781803241005
Length: 382 pages
Edition: 1st Edition

Authors (2): Ned Letcher, Simon Aubury

Table of Contents (15)
Preface
Chapter 1: An Introduction to DuckDB
Chapter 2: Loading Data into DuckDB
Chapter 3: Data Manipulation with DuckDB
Chapter 4: DuckDB Operations and Performance
Chapter 5: DuckDB Extensions
Chapter 6: Semi-Structured Data Manipulation
Chapter 7: Setting up the DuckDB Python Client
Chapter 8: Exploring DuckDB's Python API
Chapter 9: Exploring DuckDB's R API
Chapter 10: Using DuckDB Effectively
Chapter 11: Hands-On Exploratory Data Analysis with DuckDB
Chapter 12: DuckDB – The Wider Pond
Index
Other Books You May Enjoy

Working with Parquet files

Apache Parquet is an open source file format designed for the efficient storage and retrieval of data. Its columnar layout, combined with compression that reduces storage space and the I/O cost of reading and writing, makes Parquet files well suited to storing and retrieving large amounts of structured and semi-structured data for analytical applications.
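Because DuckDB can query Parquet files in place, a quick way to see this efficiency is to point a SELECT at a file: only the columns the query touches are read from disk. The following is a minimal sketch, not an example from the book; the file flights.parquet and its origin and dep_delay columns are hypothetical.

```sql
-- Query a Parquet file directly with DuckDB's read_parquet table function.
-- 'flights.parquet' and its columns are hypothetical placeholders.
SELECT origin, AVG(dep_delay) AS avg_dep_delay
FROM read_parquet('flights.parquet')  -- only origin and dep_delay are read from disk
GROUP BY origin
ORDER BY avg_dep_delay DESC;
```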

Parquet files are encoded in a binary format, so you cannot view them as text files as you might with a CSV file. Parquet files are self-describing in that each file contains both data and metadata describing the schema of the data within the file. This means that column names, their data types, and summary information about the number of rows and columns are encoded within the file. This contrasts with CSV and JSON files, which contain purely text data without an embedded schema. In addition to performance gains, this is one of the notable benefits of Parquet files, as their built-in schema...
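DuckDB exposes this embedded metadata directly through its built-in parquet_schema and parquet_metadata table functions. As a rough sketch (reusing the hypothetical flights.parquet file from above), you can inspect the column definitions and row-group statistics without scanning the data itself:

```sql
-- Column names and Parquet types recorded in the file's embedded schema
SELECT * FROM parquet_schema('flights.parquet');

-- Per-row-group metadata, including value counts and compression details
SELECT * FROM parquet_metadata('flights.parquet');
```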

You have been reading a chapter from
Getting Started with DuckDB
Published in: Jun 2024
Publisher: Packt
ISBN-13: 9781803241005