Cloud Scale Analytics with Azure Data Services
Build modern data warehouses on Microsoft Azure

Author: Patrik Borosch
Publisher: Packt
Published: July 2021 (1st Edition)
ISBN-13: 9781800562936
Length: 520 pages
Format: Paperback
Table of Contents

Preface
Section 1: Data Warehousing and Considerations Regarding Cloud Computing
Chapter 1: Balancing the Benefits of Data Lakes Over Data Warehouses
Chapter 2: Connecting Requirements and Technology
Section 2: The Storage Layer
Chapter 3: Understanding the Data Lake Storage Layer
Chapter 4: Understanding Synapse SQL Pools and SQL Options
Section 3: Cloud-Scale Data Integration and Data Transformation
Chapter 5: Integrating Data into Your Modern Data Warehouse
Chapter 6: Using Synapse Spark Pools
Chapter 7: Using Databricks Spark Clusters
Chapter 8: Streaming Data into Your MDWH
Chapter 9: Integrating Azure Cognitive Services and Machine Learning
Chapter 10: Loading the Presentation Layer
Section 4: Data Presentation, Dashboarding, and Distribution
Chapter 11: Developing and Maintaining the Presentation Layer
Chapter 12: Distributing Data
Chapter 13: Introducing Industry Data Models
Chapter 14: Establishing Data Governance
Other Books You May Enjoy

Using additional libraries with your Spark pool

There are many cases where you need to rely on additional functionality from third-party libraries. Synapse Spark supports adding libraries to your Spark pool and makes them available whenever a Spark instance is started from that pool. There are several ways to use this functionality.

Using public libraries

In the case of PyPI packages, you create a file named requirements.txt and add it to the configuration of your Spark pool. Within this file, you list all the packages that you want to include when a Spark instance starts. The entries follow the pip freeze format, with the version pinned next to each package name:

packagename==1.2.1
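
For example, a requirements.txt that pulls in a few extra data engineering libraries could look like the following. The package names and versions here are purely illustrative; list whatever your workload needs and make sure the versions are compatible with your pool's Spark runtime:

beautifulsoup4==4.9.3
great-expectations==0.13.19
xgboost==1.4.2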

The requirements.txt file can be uploaded in the Packages section of the Spark pool properties during creation, or added later if you need to.

You'll find the location to upload your file in Figure 6.16...
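
Once the pool restarts with the new requirements applied, a quick way to confirm that your packages are actually available is to list the installed distributions from a notebook cell attached to that pool. The following is a minimal sketch using pkg_resources (part of setuptools, which ships with the pool's Python environment); it simply prints every installed package with its version so you can check that your pins have arrived:

# List every package installed in the Spark pool's Python environment,
# so you can verify that the pins from requirements.txt were applied.
import pkg_resources

for dist in sorted(pkg_resources.working_set, key=lambda d: d.project_name.lower()):
    print(f"{dist.project_name}=={dist.version}")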
