FL algorithms

FL algorithms, such as FedSGD, FedAvg, and Adaptive Federated Optimization, play a crucial role in the distributed training of ML models: they define how clients compute updates on their local data and how the central server aggregates those updates, while the raw training data never leaves each client. In this section, we will explore these algorithms and their key characteristics.

FedSGD

Federated stochastic gradient descent (FedSGD) is a fundamental algorithm used in FL. It extends the traditional SGD optimization method to the federated setting. In FedSGD, each client (entity) computes the gradients on its local data and sends them to the central server. The server aggregates the gradients, typically weighting each client by the size of its local dataset, and updates the global model parameters accordingly. FedSGD is efficient for large-scale distributed training, but it requires a communication round for every gradient step and can struggle when client data is non-IID.

Figure 6.7 – The FedSGD model weights exchange with the server
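
To make this exchange concrete, here is a minimal FedSGD sketch in Python with NumPy, applied to a simple linear regression model. The synthetic data, function names, and hyperparameters are illustrative assumptions for this sketch, not code from the book; in a real deployment, the clients and server would run on separate machines and communicate over a network.

import numpy as np

# Illustrative FedSGD sketch (assumed setup, not the book's code):
# linear regression trained across three simulated clients.

def client_gradient(weights, X, y):
    # Client side: compute the mean-squared-error gradient on local
    # data. Only this gradient is sent to the server, not the data.
    return X.T @ (X @ weights - y) / len(y)

def server_round(weights, clients, learning_rate=0.1):
    # Server side: aggregate client gradients, weighting each client
    # by its local dataset size, then take one global SGD step.
    total = sum(len(y) for _, y in clients)
    aggregated = sum(
        (len(y) / total) * client_gradient(weights, X, y)
        for X, y in clients
    )
    return weights - learning_rate * aggregated

# Simulate three clients with synthetic local datasets.
rng = np.random.default_rng(0)
true_weights = np.array([2.0, -1.0])
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    y = X @ true_weights + rng.normal(scale=0.1, size=50)
    clients.append((X, y))

weights = np.zeros(2)
for _ in range(100):  # one communication round per gradient step
    weights = server_round(weights, clients)
print(weights)  # should approach [2.0, -1.0]

Note that every global update costs one full communication round, which is one reason FedAvg instead lets clients take multiple local steps before synchronizing with the server.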

Let’s look at the FedSGD algorithm:

Server-side algorithm...
