Getting Started with DuckDB

Getting Started with DuckDB: A practical guide for accelerating your data science, data analytics, and data engineering workflows

Simon Aubury, Ned Letcher
5 (1 Ratings)
eBook | Jun 2024 | 382 pages | 1st Edition
eBook: $29.99 $43.99
Paperback: $54.99
Subscription: Free Trial (renews at $19.99 p/m)

What do you get with eBook?

  • Instant access to your Digital eBook purchase
  • Download this book in EPUB and PDF formats
  • Access this title in our online reader with advanced features
  • DRM FREE - Read whenever, wherever and however you want
  • AI Assistant (beta) to help accelerate your learning


Loading Data into DuckDB

DuckDB is a flexible analytical database that can handle a variety of data types and workloads. A common task when working with DuckDB is loading data from external data sources, such as comma-separated values (CSV), JavaScript Object Notation (JSON), and Apache Parquet files. In this chapter, we will introduce the basic concepts and methods for loading data into DuckDB and provide some examples and best practices to help you get started.

You will learn how to load data into DuckDB from external data sources, how to create tables using SQL commands, and how to load data from various sources and formats, including CSV, JSON, and Parquet files, along with exploring some of the considerations when working with compressed columnar formats. We will also use DuckDB to query and analyze a public dataset, in addition to reviewing how we can export data from DuckDB.

In this chapter, we’re going to cover the following main topics:

  • Loading CSV files...

Technical requirements

We need to get data into DuckDB so that it can be queried, transformed, and analyzed. The files and example data for these exercises are available in the chapter_02 folder at https://github.com/PacktPublishing/Getting-Started-with-DuckDB/tree/main/chapter_02.

Important note

In this chapter, we will be learning how to load data into (and export data from) DuckDB using local files. DuckDB can also read and write data located in Simple Storage Service (S3)-compatible object stores, including AWS S3, Google Cloud Storage, and Azure Blob Storage. We will explore using DuckDB to interact with object stores in Chapter 5.

Loading CSV files

CSV files are ubiquitous in the world of analytical data, which is why DuckDB comes with a powerful and flexible built-in CSV parser. The appeal of this simple text-based format is that CSV files are easy to inspect, and their tabular format is readily comprehended. While they are straightforward to produce and share, CSV files often present challenges. Notably, they come in a wide variety of dialects and, often, non-standard variations. For example, despite their name, they sometimes use characters other than commas, such as tabs, to delimit each field; they may or may not have a header row with column names; and there are different approaches to escaping special characters, such as delimiters and quotes. When parsing a CSV file, its specific format can often be inferred but may need to be specified manually. Furthermore, CSV files don’t contain an embedded schema, meaning that conversion from text values...
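
As a rough illustration of the kind of query involved (the file names and options here are assumptions for this sketch, not taken from the chapter), a CSV load might first let DuckDB infer the format and then, if needed, spell out the dialect explicitly:

-- Load a CSV file, letting DuckDB infer the delimiter, header, and column types
CREATE OR REPLACE TABLE example_csv AS
SELECT *
FROM read_csv('data/example.csv');

-- If inference needs a hint, the dialect can be specified manually
SELECT *
FROM read_csv('data/example.tsv', delim = '\t', header = true);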

Loading JSON files

JSON is a popular and open file format for storing and exchanging data and has good integration with many programming languages, libraries, and data systems. A JSON object is written inside curly braces and can contain multiple name-value pairs. A name-value pair consists of an attribute name in double quotes, followed by a colon, followed by a value. JSON values can include strings, numbers, and Boolean data types, as well as nested objects and array data types, which are represented as comma-separated sequences of values wrapped in square brackets. Here’s an example of a simple JSON object:

{"food_name":"Hawaiian Pizza", "quantity":6}

DuckDB can read JSON files with the read_json function in a variety of formats. In the wild, you may encounter a range of different types of JSON files, with records represented using different conventions. Two common flavors of JSON data that you may encounter in the wild are newline-delimited...
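
As a minimal sketch (the file name is an assumption for illustration; the column names follow the example object above), reading a newline-delimited JSON file might look like this:

-- Read a newline-delimited JSON file; DuckDB infers column names and types
SELECT food_name, quantity
FROM read_json('data/orders.jsonl', format = 'newline_delimited');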

Working with Parquet files

Apache Parquet is an open source file format that is designed for efficient storage and retrieval of data. Its column-oriented format, combined with the use of compression to reduce storage space and the I/O cost of reading and writing, makes Parquet files well suited for storing and retrieving large amounts of structured and semi-structured data for analytical applications.

Parquet files are encoded in a binary format, so you cannot view them as text files as you might with a CSV file. Parquet files are self-describing in that each file contains both data and metadata describing the schema of the data within the file. This means that column names, their data types, and summary information about the number of rows and columns are encoded within the file. This contrasts with CSV and JSON files, which contain purely text data without an embedded schema. In addition to performance gains, this is one of the notable benefits of Parquet files, as their built-in schema...
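
As a hedged sketch (the file name is an assumption), querying a Parquet file and inspecting the schema embedded in its metadata might look like the following:

-- Query a Parquet file directly; the schema is read from the file's metadata
SELECT count(*)
FROM read_parquet('data/example.parquet');

-- DESCRIBE exposes the column names and types stored in the file
DESCRIBE SELECT * FROM read_parquet('data/example.parquet');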

Exploring public datasets

Public datasets are collections of data that are made available to the public by various sources, such as governments, organizations, and researchers. Public datasets can be useful for analyzing trends and patterns in different domains, such as health, education, environment, or social impact.

DuckDB is a powerful tool for exploring, understanding, and gaining insights from public datasets. In this section, we’ll work with a public dataset that has been made available in CSV format. We’ll use DuckDB to load it in an appropriate form, summarize it, and export it to another format. This worked example will allow us to showcase some of DuckDB’s versatile features that make it well suited for analytical workflows.

Bike-share station readings

We are going to be exploring the Melbourne Bike Share dataset, which provides historical data from the Melbourne Bike Share service, which operated from 2010 to 2019, and is made available by...

Exporting data

So far, we have been importing data from a variety of data sources into DuckDB. We also often need to export data from DuckDB into a file, perhaps to transfer to another database system or to share our data with others. Let’s discuss how we can achieve that in the following sections.

Exporting a table into a CSV file

We can use the COPY ... TO command to export data from DuckDB to an external CSV, JSON, or Parquet file. This command can be called either with a table name or a query and will export the corresponding data to disk in the desired file format. Let’s try this out by creating a subset of our bike-share dataset and exporting it as a CSV file.

We’ll first create a table called bike_readings_april containing bike rides that occurred in April:

CREATE OR REPLACE TABLE bike_readings_april AS
SELECT *
FROM bikes
WHERE RUNDATE BETWEEN '2017-04-01' AND '2017-04-30';

With the bike_readings_april table created, let...
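
As a rough sketch of the COPY ... TO export described above (the output file names and options are assumptions for illustration), the command can write either a table or a query result to disk:

-- Export the table to a CSV file with a header row
COPY bike_readings_april TO 'bike_readings_april.csv' (HEADER, DELIMITER ',');

-- A query result can be exported in the same way, here as a Parquet file
COPY (SELECT * FROM bike_readings_april) TO 'bike_readings_april.parquet' (FORMAT PARQUET);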

Summary

In this chapter, we learned how to load data into and export data from DuckDB. We worked with text-based formats in the form of CSV and JSON, as well as the self-describing binary columnar format, Apache Parquet.

We learned techniques to format data during import and skip records with errors, and we saw how DuckDB can support a variety of changing schemas and data types. We also learned how to find, process, and summarize public datasets and saw how DuckDB can be used to export data for consumption by analytical systems.

Now that we know how to load data into DuckDB, the next chapter will cover techniques for using DuckDB for data manipulation in order to transform your data. You will learn how to clean and reshape data using SQL and use these approaches to manipulate data from different sources and formats. You will also see how to interact with data located on remote systems, such as data located on remote web servers.


Key benefits

  • Use DuckDB to rapidly load, transform, and query data across a range of sources and formats
  • Gain practical experience using SQL, Python, and R to effectively analyze data
  • Learn how open source tools and cloud services in the broader data ecosystem complement DuckDB’s versatile capabilities
  • Purchase of the print or Kindle book includes a free PDF eBook

Description

DuckDB is a fast in-process analytical database. Getting Started with DuckDB offers a practical overview of its usage. You'll learn to load, transform, and query various data formats, including CSV, JSON, and Parquet. The book covers DuckDB's optimizations, SQL enhancements, and extensions for specialized applications. Working with examples in SQL, Python, and R, you'll explore analyzing public datasets and discover tools enhancing DuckDB workflows. This guide suits both experienced and new data practitioners, quickly equipping you to apply DuckDB's capabilities in analytical projects. You'll gain proficiency in using DuckDB for diverse tasks, enabling effective integration into your data workflows.

Who is this book for?

If you’re interested in expanding your analytical toolkit, this book is for you. It will be particularly valuable for data analysts wanting to rapidly explore and query complex data, data and software engineers looking for a lean and versatile data processing tool, along with data scientists needing a scalable data manipulation library that integrates seamlessly with Python and R. You will get the most from this book if you have some familiarity with SQL and foundational database concepts, as well as exposure to a programming language such as Python or R.

What you will learn

  • Understand the properties and applications of a columnar in-process database
  • Use SQL to load, transform, and query a range of data formats
  • Discover DuckDB's rich extensions and learn how to apply them
  • Use nested data types to model semi-structured data and extract and model JSON data
  • Integrate DuckDB into your Python and R analytical workflows
  • Effectively leverage DuckDB's convenient SQL enhancements
  • Explore the wider ecosystem and pathways for building DuckDB-powered data applications

Product Details

Publication date: Jun 24, 2024
Length: 382 pages
Edition: 1st
Language: English
ISBN-13: 9781803232539


Packt Subscriptions

See our plans and pricing

$19.99 billed monthly
  • Unlimited access to Packt's library of 7,000+ practical books and videos
  • Constantly refreshed with 50+ new titles a month
  • Exclusive early access to books as they're written
  • Solve problems while you work with advanced search and reference features
  • Offline reading on the mobile app
  • Simple pricing, no contract

$199.99 billed annually
  • Unlimited access to Packt's library of 7,000+ practical books and videos
  • Constantly refreshed with 50+ new titles a month
  • Exclusive early access to books as they're written
  • Solve problems while you work with advanced search and reference features
  • Offline reading on the mobile app
  • Choose a DRM-free eBook or Video every month to keep
  • PLUS own as many other DRM-free eBooks or Videos as you like for just $5 each
  • Exclusive print discounts

$279.99 billed in 18 months
  • Unlimited access to Packt's library of 7,000+ practical books and videos
  • Constantly refreshed with 50+ new titles a month
  • Exclusive early access to books as they're written
  • Solve problems while you work with advanced search and reference features
  • Offline reading on the mobile app
  • Choose a DRM-free eBook or Video every month to keep
  • PLUS own as many other DRM-free eBooks or Videos as you like for just $5 each
  • Exclusive print discounts

Frequently bought together

Getting Started with DuckDB: $54.99
Mastering Node.js Web Development: $49.99
Machine Learning with PyTorch and Scikit-Learn: $54.99
Total: $159.97

Table of Contents

14 Chapters
Chapter 1: An Introduction to DuckDB
Chapter 2: Loading Data into DuckDB
Chapter 3: Data Manipulation with DuckDB
Chapter 4: DuckDB Operations and Performance
Chapter 5: DuckDB Extensions
Chapter 6: Semi-Structured Data Manipulation
Chapter 7: Setting up the DuckDB Python Client
Chapter 8: Exploring DuckDB’s Python API
Chapter 9: Exploring DuckDB’s R API
Chapter 10: Using DuckDB Effectively
Chapter 11: Hands-On Exploratory Data Analysis with DuckDB
Chapter 12: DuckDB – The Wider Pond
Index
Other Books You May Enjoy

Customer reviews

Rating distribution: 5 (1 Ratings)
5 star 100%
4 star 0%
3 star 0%
2 star 0%
1 star 0%
Vishnuvardhan Oct 17, 2024
5
"Getting Started with DuckDB" provides an excellent, hands-on introduction to DuckDB, showcasing its speed and versatility in data analytics and engineering. The practical examples and easy-to-follow explanations make it a valuable resource for anyone looking to enhance their workflows. Ideal for beginners and experienced professionals alike, it bridges the gap between theory and application effectively.
Amazon Verified review

FAQs

How do I buy and download an eBook?

Where there is an eBook version of a title available, you can buy it from the book details for that title. Add either the standalone eBook or the eBook and print book bundle to your shopping cart. Your eBook will show in your cart as a product on its own. After completing checkout and payment in the normal way, you will receive your receipt on the screen containing a link to a personalised PDF download file. This link will remain active for 30 days. You can download backup copies of the file by logging in to your account at any time.

If you already have Adobe Reader installed, then clicking on the link will download and open the PDF file directly. If you don't, then save the PDF file on your machine and download the Reader to view it.

Please Note: Packt eBooks are non-returnable and non-refundable.

Packt eBook and Licensing When you buy an eBook from Packt Publishing, completing your purchase means you accept the terms of our licence agreement. Please read the full text of the agreement. In it we have tried to balance the need for the eBook to be usable for you, the reader, with our need to protect the rights of us as publishers and of our authors. In summary, the agreement says:

  • You may make copies of your eBook for your own use onto any machine
  • You may not pass copies of the eBook on to anyone else
How can I make a purchase on your website?

If you want to purchase a video course, eBook, or Bundle (Print+eBook), please follow the steps below:

  1. Register on our website using your email address and a password.
  2. Search for the title by name or ISBN using the search option.
  3. Select the title you want to purchase.
  4. Choose the format you wish to purchase the title in; if you order the Print Book, you get a free eBook copy of the same title.
  5. Proceed with the checkout process (payment to be made using Credit Card, Debit Card, or PayPal).
Where can I access support around an eBook?
  • If you experience a problem with using or installing Adobe Reader, then contact Adobe directly.
  • To view the errata for the book, see www.packtpub.com/support and view the pages for the title you have.
  • To view your account details or to download a new copy of the book, go to www.packtpub.com/account
  • To contact us directly if a problem is not resolved, use www.packtpub.com/contact-us
What eBook formats do Packt support?

Our eBooks are currently available in a variety of formats, such as PDF and ePub. In the future, this may well change with trends and developments in technology, but please note that our PDFs are not in the Adobe eBook Reader format, which imposes greater security restrictions.

You will need to use Adobe Reader v9 or later in order to read Packt's PDF eBooks.

What are the benefits of eBooks?
  • You can get the information you need immediately
  • You can easily take them with you on a laptop
  • You can download them an unlimited number of times
  • You can print them out
  • They are copy-paste enabled
  • They are searchable
  • There is no password protection
  • They are lower priced than print
  • They save resources and space
What is an eBook?

Packt eBooks are a complete electronic version of the print edition, available in PDF and ePub formats. Every piece of content down to the page numbering is the same. Because we save the costs of printing and shipping the book to you, we are able to offer eBooks at a lower cost than print editions.

When you have purchased an eBook, simply log in to your account and click on the link in Your Download Area. We recommend saving the file to your hard drive before opening it.

For optimal viewing of our eBooks, we recommend you download and install the free Adobe Reader version 9.