In-Memory Analytics with Apache Arrow

Perform fast and efficient data analytics on both flat and hierarchical structured data

Product type: Paperback
Published: Jun 2022
Publisher: Packt
ISBN-13: 9781801071031
Length: 392 pages
Edition: 1st
Author: Matthew Topol
Table of Contents (16 chapters)

Preface
Section 1: Overview of What Arrow Is, its Capabilities, Benefits, and Goals
Chapter 1: Getting Started with Apache Arrow
Chapter 2: Working with Key Arrow Specifications
Chapter 3: Data Science with Apache Arrow
Section 2: Interoperability with Arrow: pandas, Parquet, Flight, and Datasets
Chapter 4: Format and Memory Handling
Chapter 5: Crossing the Language Barrier with the Arrow C Data API
Chapter 6: Leveraging the Arrow Compute APIs
Chapter 7: Using the Arrow Datasets API
Chapter 8: Exploring Apache Arrow Flight RPC
Section 3: Real-World Examples, Use Cases, and Future Development
Chapter 9: Powered by Apache Arrow
Chapter 10: How to Leave Your Mark on Arrow
Chapter 11: Future Development and Plans
Other Books You May Enjoy

Learning about memory cartography

One draw of distributed systems such as Apache Spark is the ability to process very large datasets quickly. Sometimes, the dataset is so large that it can't fit entirely in memory on a single machine, so breaking the data into chunks and processing them in parallel across a cluster becomes a necessity. But what if you could process a huge, multi-gigabyte file while using almost no RAM at all? That's where memory mapping comes in.
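As a minimal sketch of the idea (assuming the pyarrow Python bindings, and assuming the data has already been converted to an Arrow IPC file, hypothetically named yellow_tripdata_2015-01.arrow here), memory-mapping lets Arrow reference the bytes on disk directly instead of copying them into RAM:

```python
import pyarrow as pa
import pyarrow.ipc as ipc

# Open the file as a memory map rather than reading it into a buffer.
# The OS pages bytes in lazily only as they are touched, so "loading"
# a multi-gigabyte file consumes almost no resident memory up front.
with pa.memory_map('yellow_tripdata_2015-01.arrow', 'r') as source:
    # Because the Arrow IPC format on disk matches the in-memory layout,
    # the resulting table's buffers point straight into the mapped file.
    table = ipc.open_file(source).read_all()
    print(table.num_rows, 'rows mapped,',
          pa.total_allocated_bytes(), 'bytes allocated from the pool')
```

The allocated-bytes figure stays close to zero because the table's column buffers are views over the mapped file, not copies of it.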

Let's turn to our NYC Taxi dataset once again to demonstrate this concept. The file named yellow_tripdata_2015-01.csv is approximately 1.8 GB in size, which makes it a perfect example. By now, you should easily be able to read that CSV file in as an Arrow table and look at the schema. Now, let's say we wanted to calculate the mean of the values in the total_amount...
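As a sketch of that first step (assuming the pyarrow CSV reader and compute kernels; the exact code in the chapter may differ), reading the file into a table and averaging the total_amount column could look like this:

```python
import pyarrow.csv as csv
import pyarrow.compute as pc

# Read the ~1.8 GB CSV fully into memory as an Arrow table.
table = csv.read_csv('yellow_tripdata_2015-01.csv')
print(table.schema)  # inspect the inferred column names and types

# Use the Arrow compute kernels to average the total_amount column.
mean_total = pc.mean(table.column('total_amount'))
print(mean_total.as_py())
```

Note that this straightforward approach materializes the entire table in RAM; the memory-mapping technique sketched above is what lets you avoid that cost.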
