Frank Kane's Taming Big Data with Apache Spark and Python: Real-world examples to help you analyze large datasets with Apache Spark

By Frank Kane

Rating: 3.8 (11 Ratings)
eBook | Jun 2017 | 296 pages | 1st Edition

eBook: zł59.99 (list price zł141.99)
Paperback: zł177.99
Subscription: Free Trial available

What do you get with eBook?

  • Instant access to your Digital eBook purchase
  • Download this book in EPUB and PDF formats
  • Access this title in our online reader with advanced features
  • DRM FREE - Read whenever, wherever and however you want
Preview Book

Frank Kane's Taming Big Data with Apache Spark and Python

Spark Basics and Spark Examples

The high-level introduction to Spark in this chapter will help you understand what Spark is all about, what it's for, who uses it, why it's so popular, and why it's so hot. Let's explore.

What is Spark?


According to Apache, Spark is a fast and general engine for large-scale data processing. That's actually a really good summary of what it's all about. If you have a really massive dataset that can represent anything - weblogs, genomics data, you name it - Spark can slice and dice that data up. It can distribute the processing among a huge cluster of computers, taking a data analysis problem that's just too big to run on one machine and dividing and conquering it by splitting it up among multiple machines.

Spark is scalable

The way Spark scales data analysis problems is that it runs on top of a cluster manager, so your actual Spark scripts are just everyday scripts written in Python, Java, or Scala; they behave just like any other script. We call this your "driver program", and it will run on your desktop or on one master node of your cluster. However, under the hood, when you run it, Spark knows how to take the work and actually farm it out to different computers on your...
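
To make that concrete, here's a minimal sketch of a driver program (the app name is made up, and this assumes you have pyspark installed). The only thing that changes when you move from your desktop to a real cluster is the master you hand to the configuration:

from pyspark import SparkConf, SparkContext

# Run locally, using every core on this machine. Pointing setMaster at a
# cluster manager instead (for example "yarn", or a spark:// master URL)
# is what lets Spark farm the same script out across many computers.
conf = SparkConf().setMaster("local[*]").setAppName("MyDriverProgram")
sc = SparkContext(conf=conf)

# From here on, this is just an everyday Python script; Spark decides
# how to distribute the actual work.
print(sc.parallelize(range(1000)).sum())

sc.stop()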

The Resilient Distributed Dataset (RDD)


In this section, we'll stop being all high level and hand-wavy and go into a little bit more depth about how Spark works from a technical standpoint. In Spark, under the hood, there's something called the Resilient Distributed Dataset object, which is the core object that everything in Spark revolves around. Even for the libraries built on top of Spark, such as Spark SQL or MLlib, you're using RDDs under the hood, or extensions to the RDD objects that make them look like something a little bit more structured. If you understand what an RDD is in Spark, you've come ninety per cent of the way to understanding Spark.

What is the RDD?

Let's talk about the RDD in reverse order, because I'm weird like that. So, fundamentally, the RDD is a dataset, and it is an abstraction for a giant set of data, which is the main thing you need to know as a developer. What you'll do is set up RDD objects, load them up with big datasets, and then call...
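
The text breaks off here, but a minimal sketch of that workflow looks like this (the file path is hypothetical):

from pyspark import SparkConf, SparkContext

conf = SparkConf().setMaster("local").setAppName("RDDBasics")
sc = SparkContext(conf=conf)

# An RDD built from an in-memory collection...
numbers = sc.parallelize([1, 2, 3, 4, 5])

# ...and one built from a file on disk, one element per line of text.
lines = sc.textFile("file:///SparkCourse/some-big-file.txt")

# Transformations just describe what you want; nothing actually runs
# until an action such as collect() or count() asks for a result.
squares = numbers.map(lambda x: x * x)
print(squares.collect())  # [1, 4, 9, 16, 25]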

Ratings histogram walk-through


Remember the RatingsHistogram code that we ran for your first Spark program? Well, let's take a closer look at that and figure out what's actually going on under the hood with it. Understanding concepts is all well and good, but nothing beats looking at some real examples. Let's go back to the RatingsHistogram example that we started off with in this book. We'll break it down and understand exactly what it's doing under the hood and how it's using our RDDs to actually get the results for the RatingsHistogram data.

Understanding the code

The first couple of lines are just boilerplate stuff. One thing you'll see in every Python Spark script is the import statement to import SparkConf and SparkContext from the pyspark library that Spark includes. You will, at a minimum, need those two objects:

from pyspark import SparkConf, SparkContext 
import collections 

SparkContext, as we talked about earlier, is the fundamental starting point that the Spark framework gives you...
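
The walkthrough is truncated here, so as context for the next few sections, here is the general shape the rest of the RatingsHistogram script takes, continuing from the imports above (a sketch, assuming the MovieLens u.data file sits in your SparkCourse folder with tab-separated user, movie, rating, and timestamp columns):

conf = SparkConf().setMaster("local").setAppName("RatingsHistogram")
sc = SparkContext(conf=conf)

lines = sc.textFile("file:///SparkCourse/ml-100k/u.data")
ratings = lines.map(lambda x: x.split()[2])  # pull out the rating column
result = ratings.countByValue()              # action: rating -> count

# Sort by rating value and print the histogram.
sortedResults = collections.OrderedDict(sorted(result.items()))
for key, value in sortedResults.items():
    print("%s %i" % (key, value))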

Key/value RDDs and the average friends by age example


A powerful thing to do with RDDs is to put more structured data into them. One thing we can do is put key/value pairs of information into Spark RDDs and then treat it like a very simple database, if you will. So let's walk through an example where we have a fabricated social network dataset, and we'll analyze that data to figure out the average number of friends, broken down by the age of the people in this fake social network. We'll use key/value pairs and RDDs to do that. Let's cover the concepts, and then we'll come back later and actually run the code.

Key/value concepts - RDDs can hold key/value pairs

RDDs can hold key/value pairs in addition to just single values. In our previous examples, we looked at RDDs that included lines of text for an input data file or that contained movie ratings. In those cases, every element of the RDD contained a single value, either a line of text or a movie rating, but you can also store more structured...
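
Here's a minimal sketch of that idea with made-up numbers, where each element is an (age, numFriends) pair:

from pyspark import SparkConf, SparkContext

conf = SparkConf().setMaster("local").setAppName("KeyValueSketch")
sc = SparkContext(conf=conf)

# Each element of this RDD is a (key, value) tuple: (age, numFriends).
pairs = sc.parallelize([(33, 385), (33, 2), (55, 221)])

# mapValues transforms only the values and leaves the keys alone...
withCounts = pairs.mapValues(lambda v: (v, 1))

# ...and reduceByKey combines all the values that share the same key.
totals = withCounts.reduceByKey(lambda a, b: (a[0] + b[0], a[1] + b[1]))
print(totals.collect())  # e.g. [(33, (387, 2)), (55, (221, 1))]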

Running the average friends by age example


Okay, let's make it real: let's actually get some real code and some real data, analyze the average number of friends by age in our fabricated dataset here, and see what we come up with.

At this point, you should go to the download package for this book, if you haven't already, and download two things: one is the friends-by-age Python script, and the other is the fakefriends.csv file, which is my randomly generated data that's completely fictitious, but useful for illustration. So go take care of that now. When you're done, move them into your C:\SparkCourse folder or wherever you're installing stuff for this course. At this point in the course, your SparkCourse folder should look like this:

At this moment, we need friends-by-age.py and fakefriends.csv, so let's double-click on the friends-by-age.py script, and Enthought Canopy or your Python environment of choice should come up. Here we have it:
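
The screenshot doesn't reproduce here, but the script on your screen should have roughly this shape (a sketch, assuming fakefriends.csv has comma-separated ID, name, age, and number-of-friends columns):

from pyspark import SparkConf, SparkContext

conf = SparkConf().setMaster("local").setAppName("FriendsByAge")
sc = SparkContext(conf=conf)

def parseLine(line):
    # Split each CSV line and keep just (age, numFriends) as a pair.
    fields = line.split(',')
    age = int(fields[2])
    numFriends = int(fields[3])
    return (age, numFriends)

lines = sc.textFile("file:///SparkCourse/fakefriends.csv")
rdd = lines.map(parseLine)

# Sum up the friend counts and the occurrence counts per age...
totalsByAge = rdd.mapValues(lambda x: (x, 1)).reduceByKey(
    lambda x, y: (x[0] + y[0], x[1] + y[1]))

# ...then divide to get the average number of friends for each age.
averagesByAge = totalsByAge.mapValues(lambda x: x[0] / x[1])
for result in averagesByAge.collect():
    print(result)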

Examining the script

So let's just review again what...

Filtering RDDs and the minimum temperature by location example


Now we're going to introduce the concept of filters on RDDs: a way to strip an RDD down to the information we care about, creating a smaller RDD from it. We'll do this in the context of another real example. We have some real weather data from the year 1800, and we're going to find the minimum temperature observed at various weather stations in that year. While we're at it, we'll also use key/value RDDs as part of this exercise. So let's go through the concepts, walk through the code and get started.

What is filter()

filter() is just another transformation you can call on an RDD; it produces a new RDD by removing the information that you don't care about. In our example, the raw weather data actually includes things such as minimum temperatures observed and maximum temperatures for every day, and also the amount of precipitation observed for every day. However, all we care about for the problem we're trying to...
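
As a small self-contained sketch of what that looks like (the station IDs and readings here are made up):

from pyspark import SparkConf, SparkContext

conf = SparkConf().setMaster("local").setAppName("FilterSketch")
sc = SparkContext(conf=conf)

# Pretend these are parsed weather rows: (stationID, entryType, temperature).
parsedLines = sc.parallelize([
    ("ITE00100554", "TMAX", 18.5),
    ("ITE00100554", "TMIN", 5.4),
    ("EZE00100082", "PRCP", 0.0),
])

# filter() keeps only the elements the lambda returns True for,
# producing a new, smaller RDD of just the minimum-temperature entries.
minTemps = parsedLines.filter(lambda x: "TMIN" in x[1])
print(minTemps.collect())  # [('ITE00100554', 'TMIN', 5.4)]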

Running the minimum temperature example and modifying it for maximums


Let's see this filter in action and find out the minimum temperature observed for each weather station in the year 1800. Go to the download package for this book and download two things: the min-temperatures Python script and the 1800.csv data file, which contains our weather information. Go ahead and download these now. When you're done, place them into your C:\SparkCourse folder or wherever you're storing all the stuff for this course:

When you're ready, go ahead and double-click on min-temperatures.py and open that up in your editor. I think it makes a little bit more sense once you see this all together. Feel free to take some time to wrap your head around it and figure out what's going on here and then I'll walk you through it.

Examining the min-temperatures script

We start off with the usual boilerplate stuff, importing what we need from pyspark and setting up a SparkContext object that we're going to call MinTemperatures...
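
The walkthrough cuts off here, but the overall shape of the script is roughly this (a sketch, assuming 1800.csv contains comma-separated station ID, date, entry type, and temperature columns, with temperatures in tenths of a degree Celsius):

from pyspark import SparkConf, SparkContext

conf = SparkConf().setMaster("local").setAppName("MinTemperatures")
sc = SparkContext(conf=conf)

def parseLine(line):
    fields = line.split(',')
    stationID = fields[0]
    entryType = fields[2]
    # Convert tenths of a degree Celsius to Fahrenheit.
    temperature = float(fields[3]) * 0.1 * (9.0 / 5.0) + 32.0
    return (stationID, entryType, temperature)

lines = sc.textFile("file:///SparkCourse/1800.csv")
parsedLines = lines.map(parseLine)

# Strip the RDD down to TMIN entries, then to (stationID, temperature).
minTemps = parsedLines.filter(lambda x: "TMIN" in x[1])
stationTemps = minTemps.map(lambda x: (x[0], x[2]))

# For each station, keep the smaller of any two temperatures seen.
minTempsByStation = stationTemps.reduceByKey(lambda x, y: min(x, y))

for station, temp in minTempsByStation.collect():
    print("%s\t%.2fF" % (station, temp))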

Running the maximum temperature by location example


I hope you did your homework. You should have had a crack at finding the maximum temperature for the year for each weather station instead of the minimum temperature, using our min-temperatures Python script as a starting point. If you haven't, go give it a try! Really, the only way you're going to learn this stuff is by diving in there and messing with the code yourself. I very strongly encourage you to give this a try; it's not hard. If you have done that though, let's move forward and take a look at my results. We can compare that to yours and see if you got it right.

Hopefully, you didn't have too much of a hard time figuring out the maximum temperature observed at each weather station for the year 1800; it just involved a few changes. If you go to the download package for this book, you can download my solution to it, which is the max-temperatures script. If you like, you can throw that into your SparkCourse directory and compare your...
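
If you want to sanity-check your own attempt before comparing scripts, the change really is tiny; relative to the min-temperatures sketch above, only the filter condition and the reduce function need to flip (again, just a sketch):

# Keep the maximum-temperature entries instead of the minimums...
maxTemps = parsedLines.filter(lambda x: "TMAX" in x[1])
stationTemps = maxTemps.map(lambda x: (x[0], x[2]))

# ...and keep the larger of any two temperatures seen per station.
maxTempsByStation = stationTemps.reduceByKey(lambda x, y: max(x, y))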

Counting word occurrences using flatMap()


We'll do a really common Spark and MapReduce example of dealing with a book or text file. We'll count all the words in a text file and find out how many times each word occurs within that text. We'll put a little bit of a twist on this task and work our way up to doing more and more complex twists later on. The first thing we need to do is go over the difference between map and flatMap again, because using flatMap in Spark is going to be the key to doing this quickly and easily. Let's talk about that and then jump into some code later on and see it in action.

Map versus flatMap

For the next few sections in this book, we'll look at your standard "count the words in a text file" sample that you see in a lot of these sorts of books, but we're going to do a little bit of a twist. We'll work our way up from a really simple implementation of counting the words, and keep adding more and more stuff to make that even better as we go along. So, to start off with...
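
Before we get into the word-count code, here's a minimal sketch of the difference: map produces exactly one output element per input element, while flatMap can produce many (or none) and flattens them all into one RDD:

from pyspark import SparkConf, SparkContext

conf = SparkConf().setMaster("local").setAppName("MapVsFlatMap")
sc = SparkContext(conf=conf)

lines = sc.parallelize(["the quick brown fox", "jumped over"])

# map: one output per input -> an RDD of 2 lists of words.
wordLists = lines.map(lambda line: line.split())
print(wordLists.collect())  # [['the', 'quick', 'brown', 'fox'], ['jumped', 'over']]

# flatMap: each input may become many outputs -> an RDD of 6 words.
words = lines.flatMap(lambda line: line.split())
print(words.collect())  # ['the', 'quick', 'brown', 'fox', 'jumped', 'over']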

Improving the word-count script with regular expressions


The main problem with the initial results from our word-count script is that we didn't account for things such as punctuation and capitalization. There are fancy ways to deal with that problem in text processing, but we're going to use a simple way for now. We'll use something called regular expressions in Python. So let's look at how that works, then run it and see it in action.

Text normalization

In the previous section, we had a first crack at counting the number of times each word occurred in our book, but the results weren't that great. Each individual word with different capitalization or punctuation surrounding it was being counted as a word of its own, and that's not what we want. We want each word to be counted only once, no matter how it's capitalized or what punctuation might surround it. We don't want duplicate words showing up in there. There are toolkits you can get for Python such as NLTK (Natural Language Toolkit...
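
The section breaks off here, but the simple regular-expression approach it's describing looks roughly like this (a sketch; book.txt is a hypothetical input file, and \W+ splits on any run of non-word characters while lower() folds the capitalization together):

import re
from pyspark import SparkConf, SparkContext

conf = SparkConf().setMaster("local").setAppName("WordCountBetter")
sc = SparkContext(conf=conf)

def normalizeWords(text):
    # Split on anything that isn't a word character, after lowercasing,
    # so "Spark," "spark" and "Spark." all count as the same word.
    return re.compile(r'\W+', re.UNICODE).split(text.lower())

lines = sc.textFile("file:///SparkCourse/book.txt")
words = lines.flatMap(normalizeWords)
wordCounts = words.countByValue()

for word, count in wordCounts.items():
    print(word, count)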


Key benefits

  • Understand how Spark can be distributed across computing clusters
  • Develop and run Spark jobs efficiently using Python
  • A hands-on tutorial by Frank Kane with over 15 real-world examples teaching you Big Data processing with Spark

Description

Frank Kane’s Taming Big Data with Apache Spark and Python is your companion to learning Apache Spark in a hands-on manner. Frank will start you off by teaching you how to set up Spark on a single system or on a cluster, and you’ll soon move on to analyzing large data sets using Spark RDD, and developing and running effective Spark jobs quickly using Python. Apache Spark has emerged as the next big thing in the Big Data domain – quickly rising from an ascending technology to an established superstar in just a matter of years. Spark allows you to quickly extract actionable insights from large amounts of data, on a real-time basis, making it an essential tool in many modern businesses. Frank has packed this book with over 15 interactive, fun-filled examples relevant to the real world, and he will empower you to understand the Spark ecosystem and implement production-grade real-time Spark projects with ease.

Who is this book for?

If you are a data scientist or data analyst who wants to learn Big Data processing using Apache Spark and Python, this book is for you. If you have some programming experience in Python, and want to learn how to process large amounts of data using Apache Spark, Frank Kane’s Taming Big Data with Apache Spark and Python will also help you.

What you will learn

  • Find out how you can identify Big Data problems as Spark problems
  • Install and run Apache Spark on your computer or on a cluster
  • Analyze large data sets across many CPUs using Spark's Resilient Distributed Datasets
  • Implement machine learning on Spark using the MLlib library
  • Process continuous streams of data in real time using the Spark streaming module
  • Perform complex network analysis using Spark's GraphX library
  • Use Amazon's Elastic MapReduce service to run your Spark jobs on a cluster

Product Details

Publication date : Jun 30, 2017
Length : 296 pages
Edition : 1st
Language : English
ISBN-13 : 9781787288300


Frequently bought together

Python: End-to-end Data Analysis - zł362.99
Hands-On Data Science and Python Machine Learning - zł177.99
Frank Kane's Taming Big Data with Apache Spark and Python - zł177.99
Total: zł718.97

Table of Contents

7 Chapters
Getting Started with Spark
Spark Basics and Spark Examples
Advanced Examples of Spark Programs
Running Spark on a Cluster
SparkSQL, DataFrames, and DataSets
Other Spark Technologies and Libraries
Where to Go From Here? – Learning More About Spark and Data Science

Customer reviews

Rating distribution: 3.8 (11 Ratings)
5 star: 54.5%
4 star: 9.1%
3 star: 9.1%
2 star: 18.2%
1 star: 9.1%

Top Reviews

Eduardo Polanco - Dec 13, 2018 - 5 stars
Exactly what I was looking for. I wanted to learn Spark with clear, easy-to-follow examples, and this book delivers. The author does a great job of organizing every chapter and thoroughly explaining everything with easy-to-follow examples.
Amazon Verified review
BK - May 13, 2019 - 5 stars
If your learning style is hands-on, this is the best book. The author explains the installation procedure very clearly, starting with what needs to be downloaded. There are screenshots at the required steps so that we don't get lost. After the installation, he explains many use cases and elaborates on every line of code. For a complete novice like me who comes from a traditional RDBMS background, the mystery around Big Data has vanished. It is good to learn some Python beforehand. Hey, if you are getting into the Big Data and Spark domain and don't like learning Java, Python is the way to go anyway. Such a lucid style of imparting knowledge; a big Thank You to Mr. Kane.
Amazon Verified review
Jim Woods - Dec 22, 2022 - 5 stars
Excellent book to learn PySpark. This book will take you through all the steps required: from setting up PySpark, to explaining the foundational concepts, to several well-explained examples. Highly recommend!
Amazon Verified review
Mohammed Ghufran Ali - Mar 31, 2019 - 5 stars
The good thing about this book is that most of the concepts are explained with examples. All of the sample scripts run fine and are well documented. I wish Mr. Frank would publish a book on advanced Python Spark and more around machine learning concepts, with examples.
Amazon Verified review
Balaji Santhana Krishnan - Sep 07, 2019 - 5 stars
Great book for beginners.
Amazon Verified review

FAQs

How do I buy and download an eBook?

Where there is an eBook version of a title available, you can buy it from the book details for that title. Add either the standalone eBook or the eBook and print book bundle to your shopping cart. Your eBook will show in your cart as a product on its own. After completing checkout and payment in the normal way, you will receive your receipt on the screen containing a link to a personalised PDF download file. This link will remain active for 30 days. You can download backup copies of the file by logging in to your account at any time.

If you already have Adobe Reader installed, then clicking on the link will download and open the PDF file directly. If you don't, then save the PDF file on your machine and download Adobe Reader to view it.

Please Note: Packt eBooks are non-returnable and non-refundable.

Packt eBook and Licensing: When you buy an eBook from Packt Publishing, completing your purchase means you accept the terms of our licence agreement. Please read the full text of the agreement. In it, we have tried to balance the need for the eBook to be usable for you, the reader, with our need to protect our rights as Publishers and the rights of our authors. In summary, the agreement says:

  • You may make copies of your eBook for your own use onto any machine
  • You may not pass copies of the eBook on to anyone else
How can I make a purchase on your website?

If you want to purchase a video course, eBook or Bundle (Print+eBook), please follow the steps below:

  1. Register on our website using your email address and a password.
  2. Search for the title by name or ISBN using the search option.
  3. Select the title you want to purchase.
  4. Choose the format you wish to purchase the title in; if you order the Print Book, you get a free eBook copy of the same title. 
  5. Proceed with the checkout process (payment to be made using Credit Card, Debit Card, or PayPal).
Where can I access support around an eBook?
  • If you experience a problem with using or installing Adobe Reader, then contact Adobe directly.
  • To view the errata for the book, see www.packtpub.com/support and view the pages for the title you have.
  • To view your account details or to download a new copy of the book go to www.packtpub.com/account
  • To contact us directly if a problem is not resolved, use www.packtpub.com/contact-us
What eBook formats do Packt support?

Our eBooks are currently available in a variety of formats such as PDF and ePub. In the future, this may well change with trends and developments in technology, but please note that our PDFs are not Adobe eBook Reader format, which has greater restrictions on security.

You will need to use Adobe Reader v9 or later in order to read Packt's PDF eBooks.

What are the benefits of eBooks?
  • You can get the information you need immediately
  • You can easily take them with you on a laptop
  • You can download them an unlimited number of times
  • You can print them out
  • They are copy-paste enabled
  • They are searchable
  • There is no password protection
  • They are lower priced than print
  • They save resources and space
What is an eBook?

Packt eBooks are a complete electronic version of the print edition, available in PDF and ePub formats. Every piece of content down to the page numbering is the same. Because we save the costs of printing and shipping the book to you, we are able to offer eBooks at a lower cost than print editions.

When you have purchased an eBook, simply log in to your account and click on the link in Your Download Area. We recommend saving the file to your hard drive before opening it.

For optimal viewing of our eBooks, we recommend you download and install the free Adobe Reader version 9.