The first step to a Spark program in Python

Spark's Python API exposes virtually all the functionality of Spark's Scala API in the Python language. A few features are not yet supported (for example, graph processing with GraphX and a handful of API methods). Refer to the Python section of the Spark Programming Guide (http://spark.apache.org/docs/latest/programming-guide.html) for more details.

PySpark is built on top of Spark's Java API. Data is processed in native Python, while it is cached and shuffled in the JVM. The Python driver program's SparkContext uses Py4J to launch a JVM and create a JavaSparkContext, and Py4J handles the local communication between the Python and Java SparkContext objects. RDD transformations in Python map to transformations on PythonRDD objects in Java. Each PythonRDD object launches Python subprocesses on the remote worker machines and communicates with them using pipes; these subprocesses are used to send the user's code to the workers and to process the data.
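
To make this bridge a little more concrete, here is a minimal sketch (not part of the book's sample project) that peeks at a few of PySpark's internal, underscore-prefixed attributes. Note that _jsc, _gateway, and _jvm are implementation details rather than a public API, so treat this purely as an illustration of the Py4J wiring described above:

from pyspark import SparkContext

sc = SparkContext("local[2]", "Py4J Bridge Sketch")

# sc._jsc is the Py4J proxy for the JVM-side JavaSparkContext, and
# sc._gateway is the Py4J JavaGateway used for the local Python-to-JVM
# communication (internal attributes, shown for illustration only)
print sc._jsc
print sc._gateway

# arbitrary JVM classes can also be reached through the gateway
print sc._jvm.java.lang.System.currentTimeMillis()

sc.stop()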

Following on from the preceding examples, we will now write a Python version. We assume that you have Python version 2.6 or higher installed on your system (most Linux and Mac OS X systems come with Python preinstalled).

The example program is included in the sample code for this chapter, in the directory named python-spark-app, which also contains the CSV data file under the data subdirectory. The project contains a script, pythonapp.py, provided here.

A simple Spark app in Python:

from pyspark import SparkContext

sc = SparkContext("local[2]", "First Spark App")
# we take the raw data in CSV format and convert it into a set of
# records of the form (user, product, price)
data = sc.textFile("data/UserPurchaseHistory.csv") \
    .map(lambda line: line.split(",")) \
    .map(lambda record: (record[0], record[1], record[2]))
# let's count the number of purchases
numPurchases = data.count()
# let's count how many unique users made purchases
uniqueUsers = data.map(lambda record: record[0]).distinct().count()
# let's sum up our total revenue
totalRevenue = data.map(lambda record: float(record[2])).sum()
# let's find our most popular product
products = data.map(lambda record: (record[1], 1.0)) \
    .reduceByKey(lambda a, b: a + b) \
    .collect()
mostPopular = sorted(products, key=lambda x: x[1], reverse=True)[0]

print "Total purchases: %d" % numPurchases
print "Unique users: %d" % uniqueUsers
print "Total revenue: %2.2f" % totalRevenue
print "Most popular product: %s with %d purchases" % (mostPopular[0], mostPopular[1])

If you compare the Scala and Python versions of our program, you will see that the syntax looks very similar in general. One key difference is how we express anonymous functions (also called lambda functions; hence the use of the lambda keyword in Python). In Scala, we've seen that an anonymous function mapping an input x to an output y is expressed as x => y, while in Python it is lambda x: y. In the reduceByKey call in the preceding code, we apply an anonymous function that maps two inputs, a and b, generally of the same type, to an output. In this case, the function we apply is the plus function; hence, lambda a, b: a + b.
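
As a small aside, and purely as an illustration (this snippet is not part of the book's sample project and uses made-up data), the lambda passed to reduceByKey behaves exactly like an ordinary named function:

from pyspark import SparkContext

sc = SparkContext("local[2]", "Lambda Sketch")

# a few hypothetical (product, count) pairs, just for illustration
pairs = sc.parallelize([("a", 1.0), ("b", 1.0), ("a", 1.0)])

def add(a, b):
    # same behaviour as lambda a, b: a + b
    return a + b

print pairs.reduceByKey(lambda a, b: a + b).collect()
print pairs.reduceByKey(add).collect()  # identical result

sc.stop()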

The best way to run the script is to run the following command from the base directory of the sample project:

 $SPARK_HOME/bin/spark-submit pythonapp.py

Here, the SPARK_HOME variable should be set to (or replaced with) the path of the directory in which you originally unpacked the prebuilt Spark binary package at the start of this chapter.

Upon running the script, you should see output similar to that of the Scala and Java examples, with the results of our computation being the same:

...
14/01/30 11:43:47 INFO SparkContext: Job finished: collect at pythonapp.py:14, took 0.050251 s

Total purchases: 5
Unique users: 4
Total revenue: 39.91
Most popular product: iPhone Cover with 2 purchases