Java: Data Science Made Easy
Data collection, processing, analysis, and more

Product type: Course
Published: July 2017
Publisher: Packt
ISBN-13: 9781788475655
Length: 734 pages
Edition: 1st

Authors (3): Alexey Grigorev, Richard M. Reese, Jennifer L. Reese
Table of Contents (29 chapters)

Title Page
Credits
Preface
1. Module 1
2. Getting Started with Data Science (free chapter)
3. Data Acquisition
4. Data Cleaning
5. Data Visualization
6. Statistical Data Analysis Techniques
7. Machine Learning
8. Neural Networks
9. Deep Learning
10. Text Analysis
11. Visual and Audio Analysis
12. Visual and Audio Analysis
13. Mathematical and Parallel Techniques for Data Analysis
14. Bringing It All Together
15. Module 2
16. Data Science Using Java
17. Data Processing Toolbox
18. Exploratory Data Analysis
19. Supervised Learning - Classification and Regression
20. Unsupervised Learning - Clustering and Dimensionality Reduction
21. Working with Text - Natural Language Processing and Information Retrieval
22. Extreme Gradient Boosting
23. Deep Learning with DeepLearning4J
24. Scaling Data Science
25. Deploying Data Science Models
26. Bibliography

Apache Spark


Apache Spark is a framework for scalable data processing. It was designed to address the shortcomings of Hadoop MapReduce: where possible, it processes data in memory rather than saving intermediate results to disk. It also offers far more operations than just map and reduce, and therefore a richer API.

The main unit of abstraction in Apache Spark is the Resilient Distributed Dataset (RDD), a distributed collection of elements. The key difference from ordinary collections or streams is that RDDs can be processed in parallel across multiple machines, much as Hadoop jobs are.

There are two types of operations we can apply to RDDs: transformations and actions.

  • Transformations: As the name suggests, these change data from one form to another. They receive an RDD as input and produce another RDD as output. Operations such as map, flatMap, and filter are examples of transformations.
  • Actions: These take an RDD and produce something else, for example, a value, a list, or a map, or save the...
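A useful way to see the transformation/action split without a Spark cluster is through Java's own Stream API, which follows the same lazy model: intermediate operations like map and filter build up a pipeline but do no work, and only a terminal operation triggers execution. This is a plain-Java analogy sketch, not Spark code itself, so no Spark classes are assumed:

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class LazyVsEager {
    public static void main(String[] args) {
        List<Integer> numbers = Arrays.asList(1, 2, 3, 4, 5);

        // "Transformations": map and filter are lazy -- nothing has
        // executed yet, only a description of the pipeline exists.
        // (Analogous to RDD.map and RDD.filter in Spark.)
        Stream<Integer> pipeline = numbers.stream()
                .map(n -> n * n)
                .filter(n -> n % 2 == 1);

        // "Action": collect is a terminal operation -- it triggers the
        // whole pipeline and materializes a result.
        // (Analogous to RDD actions such as collect or count in Spark.)
        List<Integer> result = pipeline.collect(Collectors.toList());
        System.out.println(result); // prints [1, 9, 25]
    }
}
```

In Spark the same shape applies, with the extra property that each transformation stage can run in parallel across the machines holding partitions of the RDD.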