Apache Spark Machine Learning Blueprints

Apache Spark Machine Learning Blueprints: Develop a range of cutting-edge machine learning projects with Apache Spark using this actionable guide

eBook: $24.99 (originally $35.99)
Paperback: $43.99


Chapter 2. Data Preparation for Spark ML

Machine learning professionals and data scientists often spend 70% to 80% of their time preparing data for their machine learning projects. Data preparation can be very hard work, but it is necessary and extremely important because it affects everything that follows. In this chapter, we will therefore cover the data preparation work needed for machine learning, which typically runs from data access and data cleaning, through dataset joining, to feature development, so that our datasets are ready for developing ML models on Spark. Specifically, we will discuss the following six data preparation tasks and then end the chapter with a discussion of repeatability and automation:

  • Accessing and loading datasets
    • Publicly available datasets for ML
    • Loading datasets into Spark easily
    • Exploring and visualizing data with Spark
  • Data cleaning
    • Dealing with missing cases and incompleteness
    • Data cleaning on Spark
    • Data cleaning made easy
  • Identity matching...

Accessing and loading datasets

In this section, we will review some publicly available datasets and cover methods of loading some of these datasets into Spark. Then, we will review several methods of exploring and visualizing these datasets on Spark.

After this section, we will be able to find some datasets to use, load them into Spark, and then start to explore and visualize this data.

Accessing publicly available datasets

Just as the open source movement has made software freely available, a very active open data movement has made a large number of datasets freely accessible to every researcher and analyst. Worldwide, most governments make their collected datasets open to the public. For example, http://www.data.gov/ offers more than 140,000 datasets that can be used freely, spanning areas such as agriculture, finance, and education.

Besides open data coming from various governmental organizations, many research institutions also collect a lot of very useful datasets and...
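
As a concrete illustration of loading one of these open datasets into Spark and taking a first look at it, the following is a minimal, hypothetical PySpark sketch; the file path, the header assumption, and the exploration calls are placeholders rather than details from the book:

from pyspark.sql import SparkSession

# Start (or reuse) a Spark session
spark = SparkSession.builder.appName("data-prep").getOrCreate()

# Hypothetical path to a CSV file downloaded from an open data portal
# such as http://www.data.gov/
df = spark.read.csv("/data/open_data_sample.csv",
                    header=True,        # first line holds column names
                    inferSchema=True)   # let Spark guess column types

# Quick exploration: schema, a few rows, and basic summary statistics
df.printSchema()
df.show(5)
df.describe().show()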

Data cleaning

In this section, we will review some methods for data cleaning on Spark with a focus on data incompleteness. Then, we will discuss some of Spark's special features for data cleaning and also some data cleaning solutions made easy with Spark.

After this section, we will be able to clean data and make datasets ready for machine learning.

Dealing with data incompleteness

For machine learning, the more data the better. However, as is often the case, more data also tends to mean dirtier data, and therefore more work to clean it.

Data quality control involves many issues, some as simple as data entry errors or duplicated records. In principle, the methods for treating them are similar: for example, we can use data logic to discover problems, and subject matter knowledge together with analytical logic to correct them. For this reason, in this section we will focus on missing value treatment to illustrate how Spark can be used for this work. Data cleaning...
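
To illustrate the kind of missing value treatment discussed here, the following hypothetical PySpark sketch counts missing values per column and then shows two common treatments, dropping incomplete rows and filling with defaults, plus removal of duplicate records; the DataFrame df and the column names used in na.fill are assumed for illustration only:

from pyspark.sql import functions as F

# Count missing values per column to gauge incompleteness
# (df is assumed to be a DataFrame loaded as in the previous sketch)
missing_counts = df.select(
    [F.count(F.when(F.col(c).isNull(), c)).alias(c) for c in df.columns])
missing_counts.show()

# Option 1: drop rows that contain any missing value
df_complete = df.na.drop(how="any")

# Option 2: fill missing values with simple per-column defaults
df_filled = df.na.fill({"age": 0, "city": "unknown"})  # hypothetical columns

# Remove exact duplicate records
df_deduped = df_filled.dropDuplicates()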

Identity matching

In this section, we will cover an important data preparation topic: identity matching and its solutions. We will discuss some of Spark's special features for solving identity issues and also some data matching solutions made easy with Spark.

After this section, we will be capable of taking care of some common data identity problems with Apache Spark.

Identity issues

For data preparation, we often need to deal with data elements that belong to the same person or unit but do not look alike. For example, we may have purchased data for a customer named Larry Z. and web activity data for L. Zhang. Is Larry Z. the same person as L. Zhang? How many such identity variations are there in the data?

Matching entities is a big challenge in machine learning data preparation because these kinds of entity variations are very common and can arise for many different reasons, such as duplications, errors, name variants, and intentional aliasing. Sometimes, it...
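
Spark does not ship a complete identity resolution routine, but a first pass at flagging candidate matches can be sketched with string normalization and an edit-distance comparison. In the following hypothetical PySpark example, the table and column names are illustrative only, and a real solution would weigh additional evidence such as addresses or e-mail addresses before accepting a match:

from pyspark.sql import functions as F

# Hypothetical customer records from two different sources
purchases = spark.createDataFrame(
    [("c1", "Larry Z.")], ["purchase_id", "customer_name"])
web_logs = spark.createDataFrame(
    [("w9", "L. Zhang")], ["session_id", "user_name"])

# Normalize names: lowercase, drop punctuation, trim extra spaces
def normalize(col):
    return F.trim(F.regexp_replace(F.lower(col), r"[^a-z ]", ""))

p = purchases.withColumn("p_name", normalize(F.col("customer_name")))
w = web_logs.withColumn("w_name", normalize(F.col("user_name")))

# Pair up records and keep those whose normalized names are within a
# small edit distance as candidates for review; a plain threshold like
# this will miss cases such as initials versus full names.
candidates = (p.crossJoin(w)
               .withColumn("dist", F.levenshtein("p_name", "w_name"))
               .filter(F.col("dist") <= 5))
candidates.show()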

Dataset reorganizing

In this section, we will cover dataset reorganization techniques. Then, we will discuss some of Spark's special features for reorganizing data, as well as some of R's methods for this task that can be used from within a Spark notebook.

After this section, we will be able to reorganize datasets for various machine learning needs.

Dataset reorganizing tasks

Reorganizing datasets sounds easy but can be very challenging and often very time consuming.

Two common data reorganizing tasks are, first, obtaining a subset of the data for modeling and, second, aggregating data to a higher level. For example, we may have student-level data but need a dataset at the classroom level; for this, we need to calculate some attributes for each student and then aggregate them into a new dataset.

To work with data reorganizing, data scientists and machine learning professionals often utilize their familiar SQL or R programming tools. Fortunately within...
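
To make the two reorganizing tasks concrete, here is a small hypothetical PySpark sketch that first subsets a student-level DataFrame and then aggregates it to the classroom level; the data values and column names (student_id, class_id, score) are assumed purely for illustration:

from pyspark.sql import functions as F

# Hypothetical student-level data
students = spark.createDataFrame(
    [("s1", "classA", 85.0), ("s2", "classA", 70.0), ("s3", "classB", 92.0)],
    ["student_id", "class_id", "score"])

# Task 1: obtain a subset of the data for modeling
passing_students = students.filter(F.col("score") >= 75.0)

# Task 2: aggregate student attributes up to the classroom level
classrooms = (students
              .groupBy("class_id")
              .agg(F.count("student_id").alias("n_students"),
                   F.avg("score").alias("avg_score")))
classrooms.show()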

Dataset joining

In this section, we will cover dataset joining techniques. We will also discuss some of Spark's special features for data joining plus some data joining solutions made easy with Spark.

After this section, we will be able to join data for various machine learning needs.

Dataset joining and its tool: Spark SQL

In preparing datasets for a machine learning project, we often need to combine data from multiple datasets. For relational tables, the task is to join tables through a primary and foreign key relationship.

Joining two or more datasets together sounds easy but can be very challenging and time consuming. In SQL, SELECT is the most frequently used command. As an example, the following is typical SQL code to perform a join:

SELECT column1, column2, …
FROM table1, table2
WHERE table1.joincolumn = table2.joincolumn
AND search_condition(s);

To work with the table joining tasks mentioned before, data scientists and machine learning professionals often utilize...
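
For comparison, the same kind of join can be expressed against Spark either through Spark SQL or through the DataFrame API. The following hypothetical PySpark sketch assumes that table1 and table2 are existing DataFrames and that joincolumn, column1, and column2 are placeholder names mirroring the SQL statement above:

# Register the DataFrames as temporary views for Spark SQL
table1.createOrReplaceTempView("table1")
table2.createOrReplaceTempView("table2")

# Spark SQL version, mirroring the SQL join shown earlier
joined_sql = spark.sql("""
    SELECT t1.column1, t2.column2
    FROM table1 t1
    JOIN table2 t2 ON t1.joincolumn = t2.joincolumn
    WHERE t2.column2 IS NOT NULL
""")

# Equivalent DataFrame API version
joined_df = (table1.join(table2, on="joincolumn", how="inner")
                   .filter(table2["column2"].isNotNull())
                   .select("column1", "column2"))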



Key benefits

  • Customize Apache Spark and R to fit your analytical needs in customer research, fraud detection, risk analytics, and recommendation engine development
  • Develop a set of practical Machine Learning applications that can be implemented in real-life projects
  • A comprehensive, project-based guide to improve and refine your predictive models for practical implementation

Description

There's a reason why Apache Spark has become one of the most popular tools in machine learning: its ability to handle huge datasets at an impressive speed means you can be much more responsive to the data at your disposal. This book shows you Spark at its very best, demonstrating how to connect it with R and unlock maximum value not only from the tool but also from your data. Packed with a range of project "blueprints" that demonstrate some of the most interesting challenges Spark can help you tackle, the book shows you how to use Spark notebooks and how to access, clean, and join different datasets. You will then put your knowledge into practice with real-world projects, in which you will see how Spark machine learning can help with everything from fraud detection to analyzing customer attrition. You'll also find out how to build a recommendation engine using Spark's parallel computing powers.

Who is this book for?

If you are a data scientist, a data analyst, or an R and SPSS user with a good understanding of machine learning concepts, algorithms, and techniques, then this is the book for you. Some basic understanding of Spark and its core elements and application is required.

What you will learn

  • Set up Apache Spark for machine learning and discover its impressive processing power
  • Combine Spark and R to unlock detailed business insights essential for decision making
  • Build machine learning systems with Spark that can detect fraud and analyze financial risks
  • Build predictive models focusing on customer scoring and service ranking
  • Build a recommendation system using SPSS on Apache Spark
  • Tackle parallel computing and find out how it can support your machine learning projects
  • Turn open data and communication data into actionable insights by making use of various forms of machine learning

Product Details

Publication date: May 30, 2016
Length: 252 pages
Edition: 1st
Language: English
ISBN-13: 9781785880391
Vendor: Apache





Table of Contents

12 Chapters
1. Spark for Machine Learning
2. Data Preparation for Spark ML
3. A Holistic View on Spark
4. Fraud Detection on Spark
5. Risk Scoring on Spark
6. Churn Prediction on Spark
7. Recommendations on Spark
8. Learning Analytics on Spark
9. City Analytics on Spark
10. Learning Telco Data on Spark
11. Modeling Open Data on Spark
Index

Customer reviews

Rating distribution
1 out of 5 stars (1 rating)
5 star 0%
4 star 0%
3 star 0%
2 star 0%
1 star 100%
Sven, Feb 17, 2017 (1 out of 5 stars, Amazon verified review)
You will find much better material directly in the Spark documentation or on Google. This book adds no value, neither in its form, nor its structure, nor its content, nor its use cases, nor its originality...

FAQs

What is included in a Packt subscription?

A subscription provides you with full access to view all Packt and licensed content online, including exclusive access to Early Access titles. Depending on the tier chosen, you can also earn credits and discounts to use for owning content.

How can I cancel my subscription?

To cancel your subscription, simply go to the account page, found in the top right of the page or at https://subscription.packtpub.com/my-account/subscription. From there, you will see the 'cancel subscription' button in the grey box containing your subscription information.

What are credits?

Credits can be earned by reading 40 sections of any title within the payment cycle, which is a month starting from the day of subscription payment. You also earn a credit every month if you subscribe to our annual or 18-month plans. Credits can be used to buy DRM-free books, the same way that you would pay for a book. Your credits can be found on the subscription homepage, subscription.packtpub.com, by clicking on the 'My Library' dropdown and selecting 'credits'.

What happens if an Early Access Course is cancelled?

Projects are rarely cancelled, but sometimes it's unavoidable. If an Early Access course is cancelled or excessively delayed, you can exchange your purchase for another course. For further details, please contact us here.

Where can I send feedback about an Early Access title?

If you have any feedback about the product you're reading, or Early Access in general, then please fill out a contact form here and we'll make sure the feedback gets to the right team. 

Can I download the code files for Early Access titles?

We try to ensure that all books in Early Access have code available to use, download, and fork on GitHub. This helps us be more agile in the development of the book, and helps keep the often changing code base of new versions and new technologies as up to date as possible. Unfortunately, however, there will be rare cases when it is not possible for us to have downloadable code samples available until publication.

When we publish the book, the code files will also be available to download from the Packt website.

How accurate is the publication date?

The publication date is as accurate as we can be at any point in the project. Unfortunately, delays can happen, and they are often out of our control, such as changes to the technology code base or delays in the tech release. We do our best to give you an accurate estimate of the publication date at any given time, and as more chapters are delivered, the delivery date becomes more accurate.

How will I know when new chapters are ready?

We'll let you know every time there has been an update to a course that you've bought in Early Access. You'll get an email to let you know there has been a new chapter, or a change to a previous chapter. The new chapters are automatically added to your account, so you can also check back there any time you're ready and download or read them online.

I am a Packt subscriber, do I get Early Access?

Yes, all Early Access content is fully available through your subscription. You will need a paid or active trial subscription in order to access all titles.

How is Early Access delivered?

Early Access is currently only available as a PDF or through our online reader. As we make changes or add new chapters, the files in your Packt account will be updated so you can download them again or view them online immediately.

How do I buy Early Access content?

Early Access is a way of us getting our content to you quicker, but the method of buying the Early Access course is still the same. Just find the course you want to buy, go through the check-out steps, and you’ll get a confirmation email from us with information and a link to the relevant Early Access courses.

What is Early Access?

Keeping up to date with the latest technology is difficult; new versions, new frameworks, new techniques. This feature gives you a head start on our content as it's being created. With Early Access you'll receive each chapter as it's written, and get regular updates throughout the product's development, as well as the final course as soon as it's ready. We created Early Access as a means of giving you the information you need as soon as it's available. As we go through the process of developing a course, 99% of it can be ready but we can't publish until that last 1% falls into place. Early Access helps to unlock the potential of our content early, to help you start your learning when you need it most. You not only get access to every chapter as it's delivered, edited, and updated, but you'll also get the finalized, DRM-free product to download in any format you want when it's published. As a member of Packt, you'll also be eligible for our exclusive offers, including a free course every day, and discounts on new and popular titles.