Using parameters and built-in functions

In this recipe, we shall demonstrate the power and versatility of ADF by performing a common task: importing data from several files (blobs) in a storage container into tables in Azure SQL Database. We shall create a pipeline, define datasets, and use a Copy activity to tie all the pieces together and transfer the data. We shall also see how easy it is to back up data with a quick modification to the pipeline.

Getting ready

In this recipe, we shall be using most of the services that were mentioned in the Technical requirements section of this chapter. Make sure that you have access to Azure SQL Database (with the AzureSQLDatabase instance we created) and the Azure storage account with the necessary .csv files already uploaded.

How to do it…

First, open your Azure Data Factory instance in the Azure portal and go to the Author and Monitor interface. Here, we shall define the datasets for input files and database tables, and the linked services (for Azure Blob Storage and Azure SQL Database):

  1. Start by creating linked services for the Azure storage account and AzureSQLDatabase.
  2. Create the linked service for the adforchestrationstorage storage account:

    (a) In the Manage tab, select Linked Services and click on the New button. On the New linked service blade, select Azure Blob Storage:

    Figure 2.2 – The New linked service blade

    (b) On the next screen, configure the linked service connection properties as shown in the following screenshot:

    Figure 2.3 – Connection configurations for Azure Blob Storage

    Name your linked service according to your naming convention (in our example, we named it OrchestrationAzureBlobStorage1).

    Select the appropriate subscription and enter the name of your storage account (where you store the .csv files).

    For Integration Runtime, select AutoResolveIntegrationRuntime.

    For Authentication method, select Account Key.

    Note:

    In this recipe, we are using Account Key authentication to access our storage account. However, in your work environment, it is recommended to authenticate using Managed Identity, taking advantage of the Azure Active Directory service. This is more secure and allows you to avoid using credentials in your code. Find the references for more information about using Managed Identity with Azure Data Factory in the See also section of this recipe.

    (c) Click the Test Connection button at the bottom and verify that you can connect to the storage account.

    (d) Finally, click on the Create button and wait for the linked service to be created.
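
    Behind the authoring UI, ADF stores each linked service as a JSON definition. A simplified sketch of what is generated for this linked service is shown below; the account name and key are placeholders (ADF keeps the key as an encrypted secure string, so you will not normally see it in the JSON view), and with the Managed Identity option mentioned in the note above, the connectionString is replaced by a reference to the storage endpoint:

        {
          "name": "OrchestrationAzureBlobStorage1",
          "properties": {
            "type": "AzureBlobStorage",
            "typeProperties": {
              "connectionString": "DefaultEndpointsProtocol=https;AccountName=<your storage account>;AccountKey=<your account key>"
            },
            "connectVia": {
              "referenceName": "AutoResolveIntegrationRuntime",
              "type": "IntegrationRuntimeReference"
            }
          }
        }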

  3. Create the second linked service for AzureSQLDatabase:
    Figure 2.4 – Connection properties for Azure SQL Database

    (a) In the Manage tab, create a new linked service, but this time select Azure SQL from the choices in the New linked service blade. You can enter Azure SQL into the search field to find it easily.

    (b) Select the subscription information and the SQL server name (the dropdown will present you with choices). Once you have selected the SQL server name, you can select your database (AzureSQLDatabase) from the dropdown in the Database Name section.

    (c) Select SQL Authentication for Authentication Type. Enter the username and password for your database. 

    (d) Make sure to test the connection. If the connection fails, ensure that you configured access correctly in Firewall and Network Settings. Once you have successfully tested the connection, click on Create to save your linked service.
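
    As with the storage linked service, ADF persists this connection as a JSON definition. A rough sketch is shown below; the server, user, and password values are placeholders, and in a real factory the password would normally be kept out of the definition altogether (for example, in Azure Key Vault) rather than typed into the connection string:

        {
          "name": "AzureSQLDatabase",
          "properties": {
            "type": "AzureSqlDatabase",
            "typeProperties": {
              "connectionString": "Data Source=<your server>.database.windows.net;Initial Catalog=AzureSQLDatabase;User ID=<your user>;Password=<your password>"
            }
          }
        }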

    Now, we shall create two datasets, one for each linked service.

  4. In the Author tab, define the dataset for Azure Storage as shown in the following screenshot:
    Figure 2.5 – Create a new dataset

    (a) Go to Datasets and click on New dataset. Select Azure Blob Storage from the choices and click Continue.

    (b) In the Select Format blade, select Delimited Text and hit Continue.

    (c) Call your new dataset CsvData and select OrchestrationAzureBlobStorage1 (the linked service we created earlier) in the Linked Service dropdown.

    (d) With the help of the folder button, navigate to the container where you uploaded the .csv files and select any file from there to specify the file path:

    Figure 2.6 – Dataset properties

    (e) Check the First Row as Header checkbox and click on Create.

  5. In the same Author tab, create a dataset for the Azure SQL table:

    (a) Go to Datasets and click on New dataset.

    (b) Select Azure SQL from the choices in the New Dataset blade.

    (c) Name your dataset AzureSQLTables.

    (d) In the Linked Service dropdown, select AzureSQLDatabase. For the table name, select Country from the dropdown.

    (e) Click on Create.

  6. Parameterize the AzureSQLTables dataset:

    (a) In the Parameters tab, enter the name of your new parameter, tableName:

    Figure 2.7 – Parameterizing the dataset

    (b) Next, in the Connection tab, check the Edit checkbox and enter dbo as the schema and @dataset().tableName in the table text field, as shown in the following screenshot:

    Figure 2.8 – Specifying a value for the dataset parameter
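
    To see what these two settings amount to, here is a simplified sketch of the dataset's JSON definition, with the tableName parameter declared and referenced through the @dataset() expression (the linked service name matches our example and may differ in yours):

        {
          "name": "AzureSQLTables",
          "properties": {
            "type": "AzureSqlTable",
            "linkedServiceName": {
              "referenceName": "AzureSQLDatabase",
              "type": "LinkedServiceReference"
            },
            "parameters": {
              "tableName": { "type": "string" }
            },
            "typeProperties": {
              "schema": "dbo",
              "table": {
                "value": "@dataset().tableName",
                "type": "Expression"
              }
            }
          }
        }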

  7. In the same way, parameterize and add dynamic content in the Connection tab for the CsvData dataset:

    (a) Select your dataset, open the Parameters tab, and create a parameter named filename.

    (b) In the Connection tab, in the File Path section, click inside the File text box, then click on the Add Dynamic Content link. This will bring up the Dynamic Content interface. In that interface, find the Parameters section and click on filename. This will generate the correct code to refer to the dataset's filename parameter in the dynamic content text box:

    Figure 2.9 – Dynamic content interface

    Click on the Finish button to finalize your choice.

    Verify that you can see both datasets under the Datasets tab:

    Figure 2.10 – Datasets in the Author tab
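
    For comparison, a simplified sketch of the parameterized CsvData dataset is shown below; the container name is a placeholder for wherever you uploaded the .csv files, and the filename parameter is resolved through the same @dataset() expression pattern:

        {
          "name": "CsvData",
          "properties": {
            "type": "DelimitedText",
            "linkedServiceName": {
              "referenceName": "OrchestrationAzureBlobStorage1",
              "type": "LinkedServiceReference"
            },
            "parameters": {
              "filename": { "type": "string" }
            },
            "typeProperties": {
              "location": {
                "type": "AzureBlobStorageLocation",
                "container": "<your container>",
                "fileName": {
                  "value": "@dataset().filename",
                  "type": "Expression"
                }
              },
              "columnDelimiter": ",",
              "firstRowAsHeader": true
            }
          }
        }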

  8. We are now ready to design the pipeline.

    In the Author tab, create a new pipeline. Change its name to pl_orchestration_recipe_1.

  9. From the Move and Transform menu in the Activities pane (on the left), drag a Copy activity onto the canvas:
    Figure 2.11 – Pipeline canvas with a Copy activity

    At the bottom of the canvas, you will see several tabs: General, Source, Sink, and so on. Use them to configure your Copy activity.

    In the General tab, you can configure the name for your activity. Call it Copy From Blob to Azure SQL.

    In the Source tab, select the CsvData dataset and specify countries.csv in the filename textbox.

    In the Sink tab, select the AzureSQLTables dataset and specify Country in the tableName text field.
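
    The settings you entered in the Source and Sink tabs translate into dataset references on the activity, with the parameter values passed alongside. A simplified sketch of the underlying activity definition looks like this:

        {
          "name": "Copy From Blob to Azure SQL",
          "type": "Copy",
          "inputs": [
            {
              "referenceName": "CsvData",
              "type": "DatasetReference",
              "parameters": { "filename": "countries.csv" }
            }
          ],
          "outputs": [
            {
              "referenceName": "AzureSQLTables",
              "type": "DatasetReference",
              "parameters": { "tableName": "Country" }
            }
          ],
          "typeProperties": {
            "source": { "type": "DelimitedTextSource" },
            "sink": { "type": "AzureSqlSink" }
          }
        }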

  10. We are ready to run the pipeline in Debug mode:

    Note

    You will learn more about using the debug capabilities of Azure Data Factory in Chapter 10, Monitoring and Troubleshooting Data Pipelines. In this recipe, we introduce you to the Output pane, which will help you understand the design and function of this pipeline.

    (a) Click the Debug button in the top panel. This will run your pipeline.

    (b) Put your cursor anywhere on the pipeline canvas. You will see the report with the status of the activities in the bottom panel in the Output tab:

    Figure 2.12 – Debug output

    Hover your cursor over the row representing the activity to see the inputs and outputs buttons. We shall make use of these in later chapters.

    After your pipeline has run, you should see that the dbo.Country table in your Azure SQL database has been populated with the countries data:

    Figure 2.13 – Contents of the Country table in Azure SQL Database

    We have copied the contents of the countries.csv file into the database. In the next steps, we shall demonstrate how parameterizing the datasets gives us the flexibility to define which file we want to copy and which SQL table we want as the destination without redesigning the pipeline.

  11. Edit the pipeline: click on the Copy From Blob to Azure SQL activity to select it, and specify airlines.csv for the filename in the Source tab and Airline for the table name in the Sink tab. Run your pipeline again (in Debug mode), and you should see that the second table is populated with the data – using the same pipeline!
  12. Now, let's say we want to back up the contents of the tables in an Azure SQL database before overwriting them with data from .csv files. We can easily enhance the existing pipeline to accomplish this.
  13. Drag another instance of the Copy activity from the Activities pane, name it Backup Copy Activity, and configure it in the following way:

    (a) In the Source tab, select the AzureSQLTables dataset and enter Airline in the tableName text box.

    (b) In the Sink tab, select the CsvData dataset and enter the following expression in the filename text box: @concat('Airlines-', utcnow(), '.backup').

    (c) Connect Backup Copy Activity to the Copy From Blob to Azure SQL activity:

    Figure 2.14 – Adding backup functionality to the pipeline
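
    A simplified sketch of the backup activity's definition is shown below. Note how the filename parameter of the CsvData dataset now receives an expression rather than a literal value; the connector you drew on the canvas simply adds a dependsOn entry (with the Succeeded condition) to the Copy From Blob to Azure SQL activity so that the backup runs first:

        {
          "name": "Backup Copy Activity",
          "type": "Copy",
          "inputs": [
            {
              "referenceName": "AzureSQLTables",
              "type": "DatasetReference",
              "parameters": { "tableName": "Airline" }
            }
          ],
          "outputs": [
            {
              "referenceName": "CsvData",
              "type": "DatasetReference",
              "parameters": {
                "filename": {
                  "value": "@concat('Airlines-', utcnow(), '.backup')",
                  "type": "Expression"
                }
              }
            }
          ],
          "typeProperties": {
            "source": { "type": "AzureSqlSource" },
            "sink": { "type": "DelimitedTextSink" }
          }
        }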

  14. Run the pipeline in Debug mode. After the run is complete, you should see the backup file in your storage account.
  15. We have created two linked services and two datasets, and we have a functioning pipeline. Click on the Publish All button at the top to save your work.

Let's look at how this works!

How it works…

In this recipe, we became familiar with all the major components of an Azure Data Factory pipeline: linked services, datasets, and activities:

  • Linked services represent configured connections between your Data Factory instance and the service that you want to use.
  • Datasets are more granular: they represent the specific view of the data that your activities will use as input and output.
  • Activities represent the actions that are performed on the data. Many activities require you to specify where the data is extracted from and where it is loaded to. The ADF terms for these entities are source and sink.

Every pipeline that you design will have those components.

In steps 1 to 3, we created the linked services to connect to Azure Blob Storage and Azure SQL Database. Then, in steps 4 and 5, we created datasets that connected to those linked services and referred to specific files or tables. In steps 6 and 7, we created parameters that represented the data we referred to, and this allowed us to change which files we wanted to load into tables without creating additional pipelines. In the remaining steps, we worked with instances of the Copy activity, specifying the inputs and outputs (sources and sinks) for the data.
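
Putting the earlier sketches together, the value you type into the Copy activity's Sink tab travels down the reference chain: the activity passes it as a dataset parameter, and the dataset plugs it into its table reference through an expression. Abbreviated from the sketches above:

    "outputs": [ { "referenceName": "AzureSQLTables", "parameters": { "tableName": "Country" } } ]
    "table":   { "value": "@dataset().tableName", "type": "Expression" }

The same mechanism resolves the filename parameter of the CsvData dataset, which is why one pipeline could load both countries.csv and airlines.csv without any redesign.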

There's more…

We used a built-in function for generating UTC timestamps in step 13. Data Factory provides many convenient built-in functions and expressions, as well as system variables, for your use. To see them, click on Backup Copy Activity in your pipeline and go to the Source tab below it. Put your cursor inside the tableName text field. You will see an Add dynamic content link appear underneath. Click on it, and you will see the Add dynamic content blade:

Figure 2.15 – Data Factory functions and system variables

This blade lists many useful functions and system variables to explore. We will use some of them in later recipes.
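
For instance, all of the following are valid entries in a dynamic content field (the first three are built-in functions, the last three are system variables). At run time, the backup expression from this recipe resolves to a name such as Airlines-2020-12-31T10:15:00.0000000Z.backup:

    @utcnow()
    @formatDateTime(utcnow(), 'yyyy-MM-dd')
    @concat('Airlines-', utcnow(), '.backup')
    @pipeline().RunId
    @pipeline().DataFactory
    @trigger().startTime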

See also

Microsoft keeps extensive documentation on Data Factory. For a more detailed explanation of the concepts used in this recipe, refer to the following pages:
