
Apache Spark Deep Learning Cookbook: Over 80 best practice recipes for the distributed training and deployment of neural networks using Keras and TensorFlow

By Ahmed Sherif and Amrith Ravindra
Rating: 1.7 (6 Ratings)
Paperback | Jul 2018 | 474 pages | 1st Edition
eBook: $29.99 (list price $43.99)
Paperback: $54.99
Subscription: free trial; renews at $19.99 p/m

What do you get with a Packt Subscription?

Free for the first 7 days. $19.99 p/m after that. Cancel any time!

  • Unlimited ad-free access to the largest independent learning library in tech. Access this title and thousands more!
  • 50+ new titles added per month, including many first-to-market concepts and exclusive early access to books as they are being written.
  • Innovative learning tools, including AI book assistants, code context explainers, and text-to-speech.
  • Thousands of reference materials covering every tech concept you need to stay up to date.

Apache Spark Deep Learning Cookbook

Creating a Neural Network in Spark

In this chapter, the following recipes will be covered:

  • Creating a dataframe in PySpark
  • Manipulating columns in a PySpark dataframe
  • Converting a PySpark dataframe into an array
  • Visualizing the array in a scatterplot
  • Setting up weights and biases for input into the neural network
  • Normalizing the input data for the neural network
  • Validating array for optimal neural network performance
  • Setting up the activation function with sigmoid
  • Creating the sigmoid derivative function
  • Calculating the cost function in a neural network
  • Predicting gender based on height and weight
  • Visualizing prediction scores

Introduction

Much of this book will focus on building deep learning algorithms with libraries in Python, such as TensorFlow and Keras. While these libraries are helpful to build deep neural networks without getting deep into the calculus and linear algebra of deep learning, this chapter will do a deep dive into building a simple neural network in PySpark to make a gender prediction based on height and weight. One of the best ways to understand the foundation of neural networks is to build a model from scratch, without any of the popular deep learning libraries. Once the foundation for a neural network framework is established, understanding and utilizing some of the more popular deep neural network libraries will become much simpler.

Creating a dataframe in PySpark

Dataframes will serve as the framework for all data that will be used in building deep learning models. Similar to the pandas library in Python, PySpark has its own built-in functionality to create a dataframe.

Getting ready

There are several ways to create a dataframe in Spark. One common way is by importing a .txt, .csv, or .json file. Another method is to manually enter fields and rows of data into the PySpark dataframe; while the process can be a bit tedious, it is helpful, especially when dealing with a small dataset. To predict gender based on height and weight, this chapter will build a dataframe manually in PySpark. While the dataset will be manually added to PySpark in this chapter, it can also be viewed and downloaded from the following link:

https://github.com/asherif844/ApacheSparkDeepLearningCookbook/blob/master/CH02/data/HeightAndWeight.txt

Manipulating columns in a PySpark dataframe

The dataframe is almost complete; however, there is one issue that requires addressing before building the neural network. Rather than keeping the gender value as a string, it is better to convert the value to a numeric integer for calculation purposes, which will become more evident as this chapter progresses.
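To see why the numeric encoding matters before touching Spark, here is a minimal plain-Python sketch of the same Female → 0 / Male → 1 mapping; the sample rows are made up for illustration:

```python
# Rows of (height, weight, gender), mimicking the dataframe's columns.
rows = [(70, 150, 'Female'), (60, 120, 'Female'), (72, 180, 'Male')]  # made-up values

# Map the gender string to a numeric label, as the recipe will do in PySpark.
encoded = [(h, w, 0 if g == 'Female' else 1) for (h, w, g) in rows]
print(encoded)  # gender is now 0/1, ready for arithmetic
```

Once the label is numeric, it can participate directly in the weighted sums and error calculations that follow.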

Getting ready

This section will require importing the following:

  • from pyspark.sql import functions

How to do it...

This section walks through the steps for the string conversion to a numeric value in the dataframe:

  • Female --> 0
  • Male --> 1

  1. Converting a column value inside of a dataframe requires importing functions:
from pyspark.sql import functions
  2. Next, modify the gender column to a numeric value using the following script:
df = df.withColumn('gender',functions.when(df['gender']=='Female',0).otherwise(1))
  3. Finally, reorder the columns so that gender is the last column in the dataframe using the following script:
df = df.select('height', 'weight', 'gender')

Converting a PySpark dataframe to an array

In order to form the building blocks of the neural network, the PySpark dataframe must be converted into an array. Python has a very powerful library, numpy, that makes working with arrays simple.

Getting ready

The numpy library should already be available with the installation of the anaconda3 Python package. However, if for some reason the numpy library is not available, it can be installed using the following command at the terminal:

pip install numpy

Running pip install or sudo pip install will confirm whether the requirements are already satisfied; the library can then be imported:

import numpy as np

How to do it...

This section walks through the steps to convert the dataframe into an array:

  1. View the data collected from the dataframe using the following script:
df.select("height", "weight", "gender").collect()
  2. Store the values from the collection into an array called data_array using the following script:
data_array = np.array(df.select("height", "weight", "gender").collect())
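Outside of Spark, the result of wrapping a collected dataframe in np.array can be mimicked directly with NumPy, so the later recipes can be tried standalone; the sample heights and weights below are made up:

```python
import numpy as np

# Each row mimics one collected Row: (height, weight, gender).
data_array = np.array([
    [70.0, 150.0, 0.0],
    [60.0, 120.0, 0.0],
    [72.0, 180.0, 1.0],
    [66.0, 140.0, 0.0],
])

print(data_array.shape)  # (4, 3): 4 individuals, 3 columns
print(data_array.dtype)  # float64
```

The resulting two-dimensional array slices cleanly by column, e.g. data_array[:, 0] for heights.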

Visualizing an array in a scatterplot

The goal of the neural network that will be developed in this chapter is to predict the gender of an individual if the height and weight are known. A powerful method for understanding the relationship between height, weight, and gender is by visualizing the data points feeding the neural network. This can be done with the popular Python visualization library matplotlib.

Getting ready

As was the case with numpy, matplotlib should be available with the installation of the anaconda3 Python package. However, if for some reason matplotlib is not available, it can be installed using the following command at the terminal:

pip install matplotlib

Running pip install or sudo pip install will confirm whether the requirements are already satisfied.

How to do it...

This section walks through the steps to visualize an array through a scatterplot:

  1. Import the matplotlib library and configure it to visualize plots inside of the Jupyter notebook using the following script:
import matplotlib.pyplot as plt
%matplotlib inline
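As a sketch of the scatterplot step, the following uses made-up height/weight data and the non-interactive Agg backend so it also runs outside Jupyter (inside a notebook, %matplotlib inline would be used instead):

```python
import matplotlib
matplotlib.use('Agg')  # non-interactive backend for scripted use
import matplotlib.pyplot as plt
import numpy as np

# Made-up (height, weight, gender) rows; gender 0 = female, 1 = male.
data_array = np.array([[70, 150, 0], [60, 120, 0], [72, 180, 1], [66, 140, 0]])

fig, ax = plt.subplots()
# Color each point by gender to see how the two classes separate.
ax.scatter(data_array[:, 0], data_array[:, 1], c=data_array[:, 2])
ax.set_xlabel('height')
ax.set_ylabel('weight')
fig.savefig('height_weight.png')
```

Plotting height against weight with gender as the color makes any separation between the classes immediately visible.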

Setting up weights and biases for input into the neural network

The framework in PySpark and the data are now complete. It is time to move on to building the neural network. Regardless of the complexity of the neural network, the development follows a similar path:

  1. Input data
  2. Add the weights and biases
  3. Sum the product of the data and weights
  4. Apply an activation function
  5. Evaluate the output and compare it to the desired outcome

This section will focus on setting the weights that create the input which feeds into the activation function.
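The five steps above can be sketched end-to-end in NumPy; the single-neuron shape and the sigmoid activation here are illustrative assumptions, not the book's exact code:

```python
import numpy as np

np.random.seed(12345)  # reproducible random initialization

# 1. Input data: one individual's (normalized) height and weight.
x = np.array([0.5, -0.3])

# 2. Add the weights and biases (randomly initialized).
w = np.random.randn(2)
b = np.random.randn()

# 3. Sum the product of the data and weights.
z = np.dot(x, w) + b

# 4. Apply an activation function (sigmoid squashes z into (0, 1)).
out = 1.0 / (1.0 + np.exp(-z))

# 5. Evaluate the output and compare it to the desired outcome (e.g. label 0).
error = out - 0.0
print(out, error)
```

Training then consists of repeating steps 3 to 5 while nudging w and b to shrink the error.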

Getting ready

A cursory understanding of the building blocks of a simple neural network is helpful in understanding this section and the rest of the chapter. Each neural network has...

Normalizing the input data for the neural network

Neural networks work more efficiently when the inputs are normalized. Normalization keeps an input with a large magnitude from dominating the overall outcome over inputs with smaller magnitudes. This section will normalize the height and weight inputs of the current individuals.

Getting ready

The normalization of input values requires obtaining the mean and standard deviation of those values for the final calculation.

How to do it...

This section walks through the steps to normalize the height and weight...
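The normalization described above is a standard z-score: subtract each column's mean and divide by its standard deviation. A minimal NumPy sketch, with made-up sample values:

```python
import numpy as np

# Columns: height, weight (made-up sample values).
X = np.array([[70.0, 150.0], [60.0, 120.0], [72.0, 180.0], [66.0, 140.0]])

# z-score each column: (value - column mean) / column standard deviation.
X_norm = (X - X.mean(axis=0)) / X.std(axis=0)

print(X_norm.mean(axis=0))  # approximately 0 for each column
print(X_norm.std(axis=0))   # approximately 1 for each column
```

After this step both inputs live on the same scale, so neither height nor weight dominates the weighted sum.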

Validating array for optimal neural network performance

A little bit of validation goes a long way in ensuring that our array is normalized for optimal performance within our upcoming neural network.  

Getting ready

This section will require a bit of numpy magic using the numpy.stack() function.

How to do it...

The following steps walk through validating that our array has been normalized:

  1. Execute the following step to print the mean and standard deviation of the array inputs:
print('standard deviation')
print(round(X[:,0].std(axis=0),0))
print('mean')
print(round(X[:,0].mean(axis=0),0))
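The remaining recipes in the chapter's list (the sigmoid activation, its derivative, and the cost function) rest on three small pieces of math that can be sketched in NumPy as follows; the function names are illustrative, not the book's:

```python
import numpy as np

def sigmoid(z):
    """Squash z into (0, 1); used as the activation function."""
    return 1.0 / (1.0 + np.exp(-z))

def sigmoid_derivative(z):
    """Derivative of the sigmoid, needed for gradient-based weight updates."""
    s = sigmoid(z)
    return s * (1.0 - s)

def cost(predictions, targets):
    """Mean squared error between predictions and the desired outcomes."""
    return np.mean((predictions - targets) ** 2)

print(sigmoid(0.0))             # 0.5
print(sigmoid_derivative(0.0))  # 0.25
```

The derivative peaks at z = 0 and vanishes for large |z|, which is why normalized inputs (previous recipe) keep learning from stalling.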


Key benefits

  • Train distributed complex neural networks on Apache Spark
  • Use TensorFlow and Keras to train and deploy deep learning models
  • Explore practical tips to enhance performance

Description

Organizations these days need to integrate popular big data tools such as Apache Spark with highly efficient deep learning libraries if they’re looking to gain faster and more powerful insights from their data. With this book, you’ll discover over 80 recipes to help you train fast, enterprise-grade, deep learning models on Apache Spark. Each recipe addresses a specific problem, and offers a proven, best-practice solution to difficulties encountered while implementing various deep learning algorithms in a distributed environment. The book follows a systematic approach, featuring a balance of theory and tips with best practice solutions to assist you with training different types of neural networks such as convolutional neural networks (CNNs) and recurrent neural networks (RNNs). You’ll also have access to code written in TensorFlow and Keras that you can run on Spark to solve a variety of deep learning problems in computer vision and natural language processing (NLP), or tweak to tackle other problems encountered in deep learning. By the end of this book, you'll have the skills you need to train and deploy state-of-the-art deep learning models on Apache Spark.

Who is this book for?

If you’re looking for a practical resource for implementing efficiently distributed deep learning models with Apache Spark, then this book is for you. Knowledge of core machine learning concepts and a basic understanding of the Apache Spark framework is required to get the most out of this book. Some knowledge of Python programming will also be useful.

What you will learn

  • Set up a fully functional Spark environment
  • Understand practical machine learning and deep learning concepts
  • Employ built-in machine learning libraries within Spark
  • Discover libraries that are compatible with TensorFlow and Keras
  • Explore NLP models such as word2vec and TF-IDF on Spark
  • Organize DataFrames for deep learning evaluation
  • Apply testing and training modeling to ensure accuracy
  • Access readily available code that can be reused

Product Details

Publication date : Jul 13, 2018
Length : 474 pages
Edition : 1st
Language : English
ISBN-13 : 9781788474221





Table of Contents

14 Chapters
  1. Setting Up Spark for Deep Learning Development
  2. Creating a Neural Network in Spark
  3. Pain Points of Convolutional Neural Networks
  4. Pain Points of Recurrent Neural Networks
  5. Predicting Fire Department Calls with Spark ML
  6. Using LSTMs in Generative Networks
  7. Natural Language Processing with TF-IDF
  8. Real Estate Value Prediction Using XGBoost
  9. Predicting Apple Stock Market Cost with LSTM
  10. Face Recognition Using Deep Convolutional Networks
  11. Creating and Visualizing Word Vectors Using Word2Vec
  12. Creating a Movie Recommendation Engine with Keras
  13. Image Classification with TensorFlow on Spark
  14. Other Books You May Enjoy

Customer reviews

Rating: 1.7 (6 Ratings)
5 star: 16.7% | 4 star: 0% | 3 star: 0% | 2 star: 0% | 1 star: 83.3%





Adnan Masood, PhD | Aug 16, 2018 | 5 stars

The tremendous impact of Artificial Intelligence and Machine Learning, and the uncanny effectiveness of deep neural networks, are hard to escape in both academia and industry. Meanwhile, implementation-grade material outlining deep learning using Spark is not always easy to find. The manuscript you are holding in your hands (or in your e-reader) is a problem-solution-oriented approach which not only shows Spark's capabilities but also the art of the possible around various machine learning and deep learning problems.

Full disclosure: I am the technical reviewer of the book and wrote the foreword. It was a pleasure reading and reviewing Ahmed Sherif and Amrith Ravindra's work, which I hope you as a reader will also find very compelling.

The authors begin by helping to set up Spark for deep learning development with clear and concisely written recipes. The initial setup is naturally followed by creating a neural network and elaborating on pain points of convolutional neural networks and recurrent neural networks. Later on, the authors provide practical (yet simplified) use cases: predicting fire department calls with Spark ML, real estate value prediction using XGBoost, predicting Apple's stock market cost with LSTM, and creating a movie recommendation engine with Keras. The book covers pertinent and highly relevant technologies with operational use cases like LSTMs in generative networks, natural language processing with TF-IDF, face recognition using deep convolutional networks, creating and visualizing word vectors using Word2Vec, and image classification with TensorFlow on Spark. Besides crisp and focused writing, this wide array of highly relevant machine learning and deep learning topics gives the book its core strength.

I hope this book will help you leverage Apache Spark to tackle business and technology problems. Highly recommended reading.
Todd | Sep 06, 2023 | 1 star
$55 bucks… not worth it. There's this website called Google.

Bethany | Sep 06, 2023 | 1 star
An absolute disgrace from cover to cover. I'd rather eat the book than use the recipes.

Amanda | Sep 02, 2023 | 1 star
Wouldn't recommend.

Patrick Sullivan | Sep 06, 2023 | 1 star
Total garbage. Not worth it.

FAQs

What is included in a Packt subscription?

A subscription provides you with full access to view all Packt and licensed content online, including exclusive access to Early Access titles. Depending on the tier chosen, you can also earn credits and discounts to use for owning content.

How can I cancel my subscription?

To cancel your subscription, go to the account page, found in the top right of the page or at https://subscription.packtpub.com/my-account/subscription. From there you will see the 'cancel subscription' button in the grey box with your subscription information in it.

What are credits?

Credits can be earned by reading 40 sections of any title within the payment cycle (a month starting from the day of subscription payment). You also earn a credit every month if you subscribe to our annual or 18-month plans. Credits can be used to buy DRM-free books, the same way that you would pay for a book. Your credits can be found on the subscription homepage (subscription.packtpub.com) by clicking on the 'My Library' dropdown and selecting 'credits'.

What happens if an Early Access course is cancelled?

Projects are rarely cancelled, but sometimes it's unavoidable. If an Early Access course is cancelled or excessively delayed, you can exchange your purchase for another course. For further details, please contact us here.

Where can I send feedback about an Early Access title?

If you have any feedback about the product you're reading, or Early Access in general, then please fill out a contact form here and we'll make sure the feedback gets to the right team. 

Can I download the code files for Early Access titles?

We try to ensure that all books in Early Access have code available to use, download, and fork on GitHub. This helps us be more agile in the development of the book, and helps keep the often changing code base of new versions and new technologies as up to date as possible. Unfortunately, however, there will be rare cases when it is not possible for us to have downloadable code samples available until publication.

When we publish the book, the code files will also be available to download from the Packt website.

How accurate is the publication date?

The publication date is as accurate as we can be at any point in the project. Unfortunately, delays can happen. Often those delays are out of our control, such as changes to the technology code base or delays in the tech release. We do our best to give you an accurate estimate of the publication date at any given time, and as more chapters are delivered, the more accurate the delivery date will become.

How will I know when new chapters are ready?

We'll let you know every time there has been an update to a course that you've bought in Early Access. You'll get an email to let you know there has been a new chapter, or a change to a previous chapter. The new chapters are automatically added to your account, so you can also check back there any time you're ready and download or read them online.

I am a Packt subscriber, do I get Early Access?

Yes, all Early Access content is fully available through your subscription. You will need a paid subscription or an active trial in order to access all titles.

How is Early Access delivered?

Early Access is currently only available as a PDF or through our online reader. As we make changes or add new chapters, the files in your Packt account will be updated so you can download them again or view them online immediately.

How do I buy Early Access content?

Early Access is a way of us getting our content to you quicker, but the method of buying the Early Access course is still the same. Just find the course you want to buy, go through the check-out steps, and you’ll get a confirmation email from us with information and a link to the relevant Early Access courses.

What is Early Access?

Keeping up to date with the latest technology is difficult; new versions, new frameworks, new techniques. This feature gives you a head-start on our content as it's being created. With Early Access you'll receive each chapter as it's written, and get regular updates throughout the product's development, as well as the final course as soon as it's ready.

We created Early Access as a means of giving you the information you need as soon as it's available. As we go through the process of developing a course, 99% of it can be ready but we can't publish until that last 1% falls into place. Early Access helps to unlock the potential of our content early, to help you start your learning when you need it most. You not only get access to every chapter as it's delivered, edited, and updated, but you'll also get the finalized, DRM-free product to download in any format you want when it's published. As a member of Packt, you'll also be eligible for our exclusive offers, including a free course every day, and discounts on new and popular titles.