The Unsupervised Learning Workshop

The Unsupervised Learning Workshop: Get started with unsupervised learning algorithms and simplify your unorganized data to help make future predictions

By Aaron Jones, Christopher Kruger, Benjamin Johnston
€17.99 €26.99
4.3 out of 5 (6 Ratings)
eBook | Jul 2020 | 550 pages | 1st Edition

eBook: €17.99 €26.99
Paperback: €32.99
Subscription: Free Trial (renews at €18.99 per month)

What do you get with eBook?

  • Instant access to your Digital eBook purchase
  • Download this book in EPUB and PDF formats
  • Access this title in our online reader with advanced features
  • DRM FREE - read whenever, wherever and however you want
  • AI Assistant (beta) to help accelerate your learning

The Unsupervised Learning Workshop

2. Hierarchical Clustering

Overview

In this chapter, we will implement the hierarchical clustering algorithm from scratch using common Python packages, perform agglomerative clustering, and compare k-means with hierarchical clustering. By the end of this chapter, we will be able to use hierarchical clustering to build stronger groupings that make more logical sense.

Introduction

In this chapter, we will expand on the basic ideas that we built in Chapter 1, Introduction to Clustering, by surrounding clustering with the concept of similarity. Once again, we will be implementing forms of the Euclidean distance to capture the notion of similarity. It is important to bear in mind that the Euclidean distance just happens to be one of the most popular distance metrics; it's not the only one. Through these distance metrics, we will expand on the simple neighbor calculations that we explored in the previous chapter by introducing the concept of hierarchy. By using hierarchy to convey clustering information, we can build stronger groupings that make more logical sense. Similar to k-means, hierarchical clustering can be helpful for cases such as customer segmentation or identifying similar product types. However, there is a slight benefit in being able to explain things in a clearer fashion with hierarchical clustering. In this chapter, we will outline...

Clustering Refresher

Chapter 1, Introduction to Clustering, covered both the high-level concepts and in-depth details of one of the most basic clustering algorithms: k-means. While it is indeed a simple approach, do not discredit it; it will be a valuable addition to your toolkit as you continue your exploration of the unsupervised learning world. In many real-world use cases, companies experience valuable discoveries through the simplest methods, such as k-means or linear regression (for supervised learning). An example of this is evaluating a large selection of customer data – if you were to evaluate it directly in a table, it would be unlikely that you'd find anything helpful. However, even a simple clustering algorithm can identify where groups within the data are similar and dissimilar. As a refresher, let's quickly walk through what clusters are and how k-means works to find them:

Figure 2.1: The attributes that separate supervised and unsupervised...
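As a concrete refresher, here is a minimal k-means run on synthetic two-dimensional data; the data and parameters are illustrative, not taken from the book's exercises:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)
# Two well-separated synthetic "customer" blobs in 2D
data = np.vstack([
    rng.normal(loc=[0, 0], scale=0.5, size=(50, 2)),
    rng.normal(loc=[5, 5], scale=0.5, size=(50, 2)),
])

# k-means alternates between assigning points to the nearest centroid
# and moving each centroid to the mean of its assigned points
model = KMeans(n_clusters=2, n_init=10, random_state=0).fit(data)
print(model.cluster_centers_)  # one centroid near (0, 0), one near (5, 5)
```

Even this tiny sketch shows the key idea: the algorithm finds the group structure without ever seeing a label.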

The Organization of the Hierarchy

Both the natural and human-made worlds contain many examples of systems organized into hierarchies, and for the most part, this makes a lot of sense. A common representation developed from these hierarchies is the tree-based data structure. Imagine that you have a parent node with any number of child nodes that can subsequently be parent nodes themselves. By organizing information into a tree structure, you can build an information-dense diagram that clearly shows how things are related to their peers and to larger abstract concepts.

An example from the natural world to help illustrate this concept can be seen in how we view the hierarchy of animals, which goes from parent classes to individual species:

Figure 2.2: The relationships of animal species in a hierarchical tree structure

In the preceding diagram, you can see an example of how relational information between varieties of animals can be...

Introduction to Hierarchical Clustering

So far, we have shown you that hierarchies can be excellent structures to organize information that clearly shows nested relationships among data points. While this helps us gain an understanding of the parent/child relationships between items, it can also be very handy when forming clusters. Expanding on the animal example in the previous section, imagine that you were simply presented with two features of animals: their height (measured from the tip of the nose to the end of the tail) and their weight. Using this information, you then have to recreate a hierarchical structure in order to identify which records in your dataset correspond to dogs and cats, as well as their relative subspecies.

Since you are only given animal heights and weights, you won't be able to deduce the specific names of each species. However, by analyzing the features that you have been provided with, you can develop a structure within the data that serves as...
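The height/weight idea above can be sketched with SciPy's hierarchical clustering tools; the animal measurements below are invented for illustration:

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

# Rows are [height_cm, weight_kg]; three cat-like and three dog-like records
animals = np.array([
    [46.0, 4.0], [48.0, 4.5], [45.0, 3.8],
    [90.0, 30.0], [95.0, 32.0], [88.0, 28.0],
])

# Build the full hierarchy using centroid linkage, then cut it into 2 groups
Z = linkage(animals, method='centroid')
labels = fcluster(Z, t=2, criterion='maxclust')
print(labels)  # the first three rows share one label, the last three the other
```

No species names were provided, yet the two physical features alone are enough to recover the cat/dog split at the top of the hierarchy.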

Linkage

In Exercise 2.01, Building a Hierarchy, you implemented hierarchical clustering using what is known as Centroid Linkage. Linkage is the concept of determining how you can calculate the distances between clusters and is dependent on the type of problem you are facing. Centroid linkage was chosen for Exercise 2.02, Applying Linkage Criteria, as it essentially mirrors the new centroid search that we used in k-means. However, this is not the only option when it comes to clustering data points. Two other popular choices for determining distances between clusters are single linkage and complete linkage.

Single Linkage works by finding the minimum distance between a pair of points across two clusters as its criterion for linkage. Simply put, it combines clusters based on the closest pair of points between the two clusters. This is expressed mathematically as follows:

dist(A, B) = min( dist(a[i], b[j]) )

In the preceding formula, a[i] is the ith point within...
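Both linkage criteria can be written from scratch as simple min/max reductions over pairwise distances; this is an illustrative sketch, not the book's exercise code:

```python
import numpy as np

def single_linkage(cluster_a, cluster_b):
    """Minimum Euclidean distance over all pairs (a[i], b[j])."""
    return min(np.linalg.norm(a - b) for a in cluster_a for b in cluster_b)

def complete_linkage(cluster_a, cluster_b):
    """Maximum Euclidean distance over all pairs (a[i], b[j])."""
    return max(np.linalg.norm(a - b) for a in cluster_a for b in cluster_b)

a = np.array([[0.0, 0.0], [1.0, 0.0]])
b = np.array([[4.0, 0.0], [6.0, 0.0]])
print(single_linkage(a, b))    # 3.0 (closest pair: (1, 0) and (4, 0))
print(complete_linkage(a, b))  # 6.0 (farthest pair: (0, 0) and (6, 0))
```

The only difference between the two criteria is the reduction (min versus max); everything else about the agglomerative procedure stays the same.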

Agglomerative versus Divisive Clustering

So far, our instances of hierarchical clustering have all been agglomerative – that is, they have been built from the bottom up. While this is typically the most common approach for this type of clustering, it is important to know that it is not the only way a hierarchy can be created. The opposite hierarchical approach, built from the top down, can also be used to create your taxonomy. This approach is called divisive hierarchical clustering and works by starting with all the data points in your dataset in one massive cluster. Many of the internal mechanics of the divisive approach will prove to be quite similar to the agglomerative approach:

Figure 2.20: Agglomerative versus divisive hierarchical clustering

As with most problems in unsupervised learning, deciding on the best approach is often highly dependent on the problem you are faced with solving.
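As a sketch, the agglomerative (bottom-up) variant is available directly in scikit-learn, whereas divisive clustering has no built-in estimator there, which is one practical reason the bottom-up approach is more common. The data below is synthetic:

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering

rng = np.random.default_rng(0)
# Three well-separated synthetic blobs of 20 points each
data = np.vstack([
    rng.normal([0, 0], 0.3, (20, 2)),
    rng.normal([3, 3], 0.3, (20, 2)),
    rng.normal([0, 3], 0.3, (20, 2)),
])

# Bottom-up: each point starts as its own cluster, and the closest
# clusters (by complete linkage here) are merged until 3 remain
agg = AgglomerativeClustering(n_clusters=3, linkage='complete').fit(data)
print(np.bincount(agg.labels_))  # roughly 20 points per cluster
```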

Imagine that you are an entrepreneur who has just bought...

k-means versus Hierarchical Clustering

In the previous chapter, we explored the merits of k-means clustering. Now, it is important to explore where hierarchical clustering fits into the picture. As we mentioned in the Linkage section, there is some potential direct overlap when it comes to grouping data points together using centroids. Universal to all of the approaches we've mentioned so far is the use of a distance function to determine similarity. Due to our in-depth exploration in the previous chapter, we used the Euclidean distance here, but we understand that any distance function can be used to determine similarities.

In practice, here are some quick highlights for choosing one clustering method over another:

  • Hierarchical clustering benefits from not needing to pass in an explicit "k" number of clusters a priori. This means that you can find all the potential clusters and decide which clusters make the most sense after the algorithm has completed...
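This "decide after the algorithm has completed" workflow can be sketched with SciPy: the merge tree is computed once, and then cut at different levels without re-running anything. The data and parameters below are illustrative:

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

rng = np.random.default_rng(1)
centers = ([0, 0], [4, 0], [2, 4])
data = np.vstack([rng.normal(c, 0.2, (15, 2)) for c in centers])

# The full hierarchy is computed once...
Z = linkage(data, method='ward')

# ...and can then be cut at several candidate cluster counts cheaply
for k in (2, 3, 4):
    labels = fcluster(Z, t=k, criterion='maxclust')
    print(k, len(set(labels)))
```

Contrast this with k-means, where changing k means re-running the whole fit.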

Summary

In this chapter, we discussed how hierarchical clustering works and where it may be best employed. In particular, we discussed various aspects of how clusters can be subjectively chosen through the evaluation of a dendrogram plot. This is a huge advantage over k-means clustering if you have absolutely no idea of what you're looking for in the data. Two key parameters that drive the success of hierarchical clustering were also discussed: the agglomerative versus divisive approach and linkage criteria. Agglomerative clustering takes a bottom-up approach by recursively grouping nearby data together until it results in one large cluster. Divisive clustering takes a top-down approach by starting with the one large cluster and recursively breaking it down until each data point falls into its own cluster. Divisive clustering has the potential to be more accurate since it has a complete view of the data from the start; however, it adds a layer of complexity that can decrease the...


Key benefits

  • Get familiar with the ecosystem of unsupervised algorithms
  • Learn interesting methods to simplify large amounts of unorganized data
  • Tackle real-world challenges, such as estimating the population density of a geographical area

Description

Do you find it difficult to understand how popular companies like WhatsApp and Amazon find valuable insights from large amounts of unorganized data? The Unsupervised Learning Workshop will give you the confidence to deal with cluttered and unlabeled datasets, using unsupervised algorithms in an easy and interactive manner. The book starts by introducing the most popular clustering algorithms of unsupervised learning. You'll find out how hierarchical clustering differs from k-means, along with understanding how to apply DBSCAN to highly complex and noisy data. Moving ahead, you'll use autoencoders for efficient data encoding. As you progress, you’ll use t-SNE models to extract high-dimensional information into a lower dimension for better visualization, in addition to working with topic modeling for implementing natural language processing (NLP). In later chapters, you’ll find key relationships between customers and businesses using Market Basket Analysis, before going on to use Hotspot Analysis for estimating the population density of an area. By the end of this book, you’ll be equipped with the skills you need to apply unsupervised algorithms on cluttered datasets to find useful patterns and insights.

Who is this book for?

If you are a data scientist who is just getting started and want to learn how to implement machine learning algorithms to build predictive models, then this book is for you. To expedite the learning process, a solid understanding of the Python programming language is recommended, as you’ll be editing classes and functions instead of creating them from scratch.

What you will learn

  • Distinguish between hierarchical clustering and the k-means algorithm
  • Understand the process of finding clusters in data
  • Grasp interesting techniques to reduce the size of data
  • Use autoencoders to decode data
  • Extract text from a large collection of documents using topic modeling
  • Create a bag-of-words model using the CountVectorizer

Product Details

Publication date : Jul 29, 2020
Length: 550 pages
Edition : 1st
Language : English
ISBN-13 : 9781800206243



Frequently bought together


The Unsupervised Learning Workshop: €32.99
The Deep Learning Workshop: €32.99
The Supervised Learning Workshop: €29.99
Total: €95.97

Table of Contents

9 Chapters
1. Introduction to Clustering
2. Hierarchical Clustering
3. Neighborhood Approaches and DBSCAN
4. Dimensionality Reduction Techniques and PCA
5. Autoencoders
6. t-Distributed Stochastic Neighbor Embedding
7. Topic Modeling
8. Market Basket Analysis
9. Hotspot Analysis

Customer reviews

Top Reviews
Rating distribution
4.3 out of 5
(6 Ratings)
5 star 33.3%
4 star 66.7%
3 star 0%
2 star 0%
1 star 0%
Marleen Feb 19, 2021
5 stars
The book starts basically right away with the first main topic: Clustering. Only a few pages on Supervised vs Unsupervised Learning serve as the introduction. However, often I find myself scanning through three chapters or so of intro or repetition before a book starts with what it is actually about, so this is a welcome change and meant as positive feedback.

Unsupervised learning is known to be more difficult to implement and also to explain to, for example, management or audit. This book can definitely be used as a guide to understand various areas of unsupervised learning, and within each area you will learn multiple methods that one could use as examples at work to explain a concept.

I am fairly new to unsupervised learning and tend to use supervised learning as much as possible, but this book definitely gives me a good base to try new things.

I can see that for more advanced users this book might be limited and sometimes too high level, but for me this book has just the right depth. I also enjoyed the accompanying material and I liked that the book tried to put images wherever possible. Therefore I would rate it, for my use cases, with 5 stars.
Amazon Verified review
ShivamPandey Feb 23, 2021
5 stars
This is an amazing book. Unsupervised Learning with Python contains comprehensive coverage of the mathematical foundations, algorithms, and practical implementations of unsupervised learning. This book provides a machine learning approach to uncover patterns and trends in your data and to support sound strategic decisions for your business.

The book bridges the gap between complex math and practical Python implementations. It covers the fundamental building blocks and concepts of unsupervised learning, choosing the right algorithm for your problem, and how to interpret the results of unsupervised learning. Market Basket Analysis and Hotspot Analysis were the standouts for me. The concepts are explained in an easy, smooth-flowing pattern.

I would love it if the book could also provide hands-on code via video exercises. I would rate this book 5 stars and a must-read for individuals exploring the space of unsupervised machine learning. Special call out to the authors and publishers of the book. Thank you.
Amazon Verified review
Julian M. - DS and ML Advisor | MSc. [c] Feb 16, 2021
4 stars
The book is written with a simple vocabulary, combining technical topics with accessible content for those inexperienced in machine learning. The examples are theoretical-practical and always written in an understandable and easily digestible format. When reading the algorithmic details, a continuous and coherent narration of the explained techniques is presented, always maintaining the general context of unsupervised learning.

The theoretical framework begins with the explanation of well-known clustering techniques such as k-means, DBSCAN, and hierarchical clustering. The deepening of the description of k-means allows extrapolating the theoretical aspects to other more complex procedures. Consecutively, dimensionality reduction techniques are detailed, emphasizing Principal Component Analysis (PCA) and t-Distributed Stochastic Neighbor Embedding (t-SNE) as a visualization algorithm. Decomposition by eigenvectors and eigenvalues is explained through Singular Value Decomposition. The Autoencoders topic, explained after PCA, has an understandable introduction to several activation functions and perceptron models, explaining Artificial Neural Networks (ANN) and Convolutional Neural Networks (CNN) in simple words with a theoretical grasp. Finally, the content is oriented towards (a) some components of natural language processing as an application example for techniques such as Topic Modeling, with emphasis on Latent Dirichlet Allocation (LDA); and (b) some components of inferential statistics, such as the estimation of data distributions through Kernel Density Estimation and the use of kernel functions for linear regression, among others.

From a practical point of view, a notable aspect is the writers' ability to explain algorithmic methods both from scratch and using Python libraries. Each technique is detailed with its practical component in Python, allowing the reader to practice while learning the theory.

My major concern is the depth of the content. It is a book that allows one to grasp the general aspects of unsupervised learning as a workshop, as the title mentions. However, some deep learning topics, such as Convolutional Neural Networks, are not explained sufficiently rigorously. The interpretability and explainability of such models are partially ignored. If you are looking for depth of theory and open implementations, this book is not the place to be. Conversely, if you are looking for a tool that allows you to learn about applied unsupervised learning algorithms, implementing known techniques, and using existing libraries, the book is the right place for your learning. Although some code is incomplete for the sake of the understanding and flow of the book, it can be found in more detail in the GitHub repository, which is referenced throughout the book.

Overall, I enjoyed reading the book and highly recommend it. It covers a wide variety of unsupervised learning techniques. The balance between theory and practice is right for those looking to get a grasp and general overview of these machine learning topics.
Amazon Verified review
Mr Critical Feb 14, 2021
4 stars
I have given this book a thorough read and I have also read several other books on data science. One of the interesting aspects of this book is that it helps the reader understand the concepts through code snippets. The book is built with good examples and robust code, which means anyone can start mimicking the examples given and understand what is happening. For me personally, it's always best when I have some form of code for each concept. Another thing that I liked about the book is the structure given for learning. The book explains unsupervised learning concepts such as k-means vs DBSCAN vs hierarchical clustering very well. A good book to have on your shelf in case you are getting into data science.
Amazon Verified review
Darpan Jan 27, 2021
4 stars
Pros:
  • This book is written elegantly and is very attractive.
  • It gives details and enough information about the methods used in unsupervised learning.
  • This is the first time I am seeing Jupyter-style code, along with its output, implemented in each section, which is quite amazing.
  • Every data scientist would like to know how an algorithm would help in real-time applications with actual data; I am glad that the author mentioned it.
  • It's amazing that visualization is also in the book, which helps to understand the algorithms statistically.

Cons:
  • One thing is its length; as I am reading the book chapter by chapter, it is becoming lengthy.
  • I am not sure that every beginner has to learn each and every algorithm from scratch in terms of coding. If a person is not into the coding field, they would directly use the library without understanding how the parameters are used.
  • There is less information about training and optimization.
Amazon Verified review

FAQs

How do I buy and download an eBook?

Where there is an eBook version of a title available, you can buy it from the book details for that title. Add either the standalone eBook or the eBook and print book bundle to your shopping cart. Your eBook will show in your cart as a product on its own. After completing checkout and payment in the normal way, you will receive your receipt on the screen containing a link to a personalised PDF download file. This link will remain active for 30 days. You can download backup copies of the file by logging in to your account at any time.

If you already have Adobe Reader installed, then clicking on the link will download and open the PDF file directly. If you don't, then save the PDF file on your machine and download the Reader to view it.

Please Note: Packt eBooks are non-returnable and non-refundable.

Packt eBook and Licensing When you buy an eBook from Packt Publishing, completing your purchase means you accept the terms of our licence agreement. Please read the full text of the agreement. In it we have tried to balance the need for the ebook to be usable for you the reader with our needs to protect the rights of us as Publishers and of our authors. In summary, the agreement says:

  • You may make copies of your eBook for your own use onto any machine
  • You may not pass copies of the eBook on to anyone else
How can I make a purchase on your website?

If you want to purchase a video course, eBook or Bundle (Print+eBook) please follow below steps:

  1. Register on our website using your email address and the password.
  2. Search for the title by name or ISBN using the search option.
  3. Select the title you want to purchase.
  4. Choose the format you wish to purchase the title in; if you order the Print Book, you get a free eBook copy of the same title. 
  5. Proceed with the checkout process (payment to be made using Credit Card, Debit Card, or PayPal).
Where can I access support around an eBook?
  • If you experience a problem with using or installing Adobe Reader, then contact Adobe directly.
  • To view the errata for the book, see www.packtpub.com/support and view the pages for the title you have.
  • To view your account details or to download a new copy of the book go to www.packtpub.com/account
  • To contact us directly if a problem is not resolved, use www.packtpub.com/contact-us
What eBook formats does Packt support?

Our eBooks are currently available in a variety of formats such as PDF and ePub. In the future, this may well change with trends and developments in technology, but please note that our PDFs are not in Adobe eBook Reader format, which has greater restrictions on security.

You will need to use Adobe Reader v9 or later in order to read Packt's PDF eBooks.

What are the benefits of eBooks?
  • You can get the information you need immediately
  • You can easily take them with you on a laptop
  • You can download them an unlimited number of times
  • You can print them out
  • They are copy-paste enabled
  • They are searchable
  • There is no password protection
  • They are lower in price than print editions
  • They save resources and space
What is an eBook?

Packt eBooks are a complete electronic version of the print edition, available in PDF and ePub formats. Every piece of content down to the page numbering is the same. Because we save the costs of printing and shipping the book to you, we are able to offer eBooks at a lower cost than print editions.

When you have purchased an eBook, simply log in to your account and click on the link in Your Download Area. We recommend saving the file to your hard drive before opening it.

For optimal viewing of our eBooks, we recommend you download and install the free Adobe Reader version 9.