TensorFlow Developer Certificate Guide

You're reading from TensorFlow Developer Certificate Guide: Efficiently tackle deep learning and ML problems to ace the Developer Certificate exam

Product type Paperback
Published in Sep 2023
Publisher Packt
ISBN-13 9781803240138
Length 344 pages
Edition 1st Edition
Author: Oluwole Fagbohun
Table of Contents (20 chapters)

Preface
Part 1 – Introduction to TensorFlow
Chapter 1: Introduction to Machine Learning (FREE CHAPTER)
Chapter 2: Introduction to TensorFlow
Chapter 3: Linear Regression with TensorFlow
Chapter 4: Classification with TensorFlow
Part 2 – Image Classification with TensorFlow
Chapter 5: Image Classification with Neural Networks
Chapter 6: Improving the Model
Chapter 7: Image Classification with Convolutional Neural Networks
Chapter 8: Handling Overfitting
Chapter 9: Transfer Learning
Part 3 – Natural Language Processing with TensorFlow
Chapter 10: Introduction to Natural Language Processing
Chapter 11: NLP with TensorFlow
Part 4 – Time Series with TensorFlow
Chapter 12: Introduction to Time Series, Sequences, and Predictions
Chapter 13: Time Series, Sequences, and Prediction with TensorFlow
Index
Other Books You May Enjoy

What is ML?

ML is a subfield of artificial intelligence (AI) in which computer systems learn patterns from data to perform specific tasks or make predictions on unseen data without being explicitly programmed. In 1959, Arthur Samuel defined ML as a “field of study that gives computers the ability to learn without being explicitly programmed to do so.” To clarify Samuel’s definition, let us unpack it using a well-known use case of ML in the banking industry.

Imagine we work on the ML team of a Fortune 500 bank in the heart of London. We are tasked with automating the fraud detection process, as the current manual process is too slow and costs the bank millions of pounds sterling due to delays in transaction processing time. Based on the preceding definition, we request historical data of previous transactions, containing both fraudulent and non-fraudulent transactions, after which we go through the ML life cycle (which we will cover shortly) and deploy our solution to prevent fraudulent activities.

In this example, we used historical data that provides us with the features (independent variables) needed to determine the outcome of the model, which is generally referred to as the target (dependent variable). In this scenario, the target is a fraudulent or non-fraudulent transaction, as shown in Figure 1.1.
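As a purely illustrative sketch of this feature/target split, the tabular data of Figure 1.1 might be represented in Python as follows. The column names and values here are invented for illustration, not taken from a real dataset:

```python
# A minimal sketch of how the tabular data in Figure 1.1 might look.
# Column names and values are hypothetical, not from the book's dataset.
transactions = [
    # features: amount (£), location, payment_type     | target: is_fraud
    {"amount": 120.50,  "location": "London",  "payment_type": "card",     "is_fraud": 0},
    {"amount": 9800.00, "location": "Unknown", "payment_type": "transfer", "is_fraud": 1},
]

# Separate the independent variables (features) from the dependent variable (target)
features = [{k: v for k, v in row.items() if k != "is_fraud"} for row in transactions]
target = [row["is_fraud"] for row in transactions]
```

The target column is what the model learns to predict; every other column is a feature it may learn from.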

Figure 1.1 – A flowchart showing the features and the target in our data


In the preceding scenario, we were able to train a model with historical data made up of features and the target to generate rules that are used to make predictions on unseen data. This is what ML is all about – the ability to empower computers to make decisions without explicit programming. In classic programming, as shown in Figure 1.2, we feed in the data and some hardcoded rules – for example, a daily transaction limit used to determine whether activity is fraudulent. If a customer exceeds this daily limit, their account gets flagged, and a human moderator intervenes to decide whether the transaction was fraudulent.
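The hardcoded-rule approach of Figure 1.2 can be sketched in a few lines of Python; the limit here is a hypothetical threshold a bank might set, not one from the book:

```python
DAILY_LIMIT = 5000.0  # hypothetical hardcoded threshold set by the bank

def flag_transaction(daily_total: float) -> bool:
    """Classic programming: data + hardcoded rule -> answer."""
    return daily_total > DAILY_LIMIT

flag_transaction(7500.0)  # True  -> account flagged for human review
flag_transaction(4999.0)  # False -> a fraudster staying under the limit goes undetected
```

The second call shows the weakness discussed next: anyone who keeps their totals just under the limit sails through.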

Figure 1.2 – A traditional programming approach


This approach will soon leave the bank overwhelmed with unhappy customers complaining about delayed transactions, while fraudsters and money launderers evade the system by simply keeping their transactions within the daily permissible limits defined by the bank. With every new attribute, we would need to update the rules. This approach quickly becomes impractical, as there will always be something new to update to keep the system working. Like a house of cards, the system will eventually fall apart, as such a complex problem, with continuously varying attributes across millions of daily transactions, may be near impossible to explicitly program.

Thankfully, we do not need to hardcode anything. We can use ML to build a model that can learn to identify patterns of fraudulent transactions from historical data, based on a set of input features. We train our model using labeled historical data of past transactions containing both fraudulent and non-fraudulent examples. This allows our model to develop rules from the data, as shown in Figure 1.3, which can be applied in the future to detect fraudulent transactions.
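To make the idea of rules learned from data concrete, here is a deliberately tiny Python sketch that fits a one-feature decision stump: it places the decision boundary midway between the largest legitimate amount and the smallest fraudulent one in the labeled history. The amounts are invented, and a real model (such as the TensorFlow models built later in this book) would learn from many features, not one:

```python
# Labeled historical data: (daily_total, is_fraud) pairs. Values are invented.
history = [(120.0, 0), (450.0, 0), (3200.0, 0), (8800.0, 1), (12500.0, 1)]

def learn_threshold(data):
    """Fit a one-feature decision stump: place the boundary midway between
    the largest legitimate amount and the smallest fraudulent one."""
    largest_legit = max(amount for amount, label in data if label == 0)
    smallest_fraud = min(amount for amount, label in data if label == 1)
    return (largest_legit + smallest_fraud) / 2

threshold = learn_threshold(history)  # 6000.0 for this toy data
```

The key contrast with the previous snippet is that nobody typed the threshold in; it came out of the data, and retraining on new data would move it automatically.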

Figure 1.3 – An ML approach


The rules generated by examining the data are used by the model to make new predictions to curb fraudulent transactions. This paradigm shift differs from traditional programming where applications are built using well-defined rules. In ML-powered applications, such as our fraud detection system, the model learns to recognize patterns and create rules from the training data; it then uses these rules to make predictions on new data to efficiently flag fraudulent transactions, as shown in Figure 1.4:
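The prediction step of Figure 1.4 can be sketched as follows; the learned threshold below is a hypothetical value standing in for whatever boundary training actually produced:

```python
# Suppose training (as in Figure 1.3) produced this decision boundary.
LEARNED_THRESHOLD = 6000.0  # hypothetical value a model might learn from history

def predict(daily_total: float) -> str:
    """Apply the learned rule to unseen data, as in Figure 1.4."""
    return "fraudulent" if daily_total > LEARNED_THRESHOLD else "legitimate"

new_transactions = [250.0, 9100.0, 5999.0]
predictions = [predict(t) for t in new_transactions]
# ['legitimate', 'fraudulent', 'legitimate']
```

Note that the model never saw these three transactions during training; it simply applies the rule it learned.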

Figure 1.4 – An ML model uses rules to make predictions on unseen data


In the example we just examined, we can see from Figure 1.1 that our training data is usually structured in a tabular form, made up of numerical values such as transaction amount and frequency of transactions, as well as categorical variables such as location and payment type. In this type of data representation, we can easily identify both the features and the target. However, what about textual data from social media, images from our smartphones, video from streaming movies, and so on, as illustrated in Figure 1.5? How do we approach these types of problems where the data is unstructured? Thankfully, we have a solution, which is called deep learning.

Figure 1.5 – An illustration of structured and unstructured data types


Deep learning is a subset of ML that mimics the human brain by using complex hierarchical models composed of multiple processing layers. The buzz around deep learning stems from the state-of-the-art performance recorded by deep learning algorithms over the years across many real-world applications, such as object detection, image classification, and speech recognition, as deep learning algorithms can model complex relationships in data. In Parts 2 and 3, we will discuss deep learning in greater detail and see it in action for image and text applications, respectively. For now, let us probe further into the world of ML by looking at the types of ML algorithms.
