Learning Microsoft Cognitive Services: Use Cognitive Services APIs to add AI capabilities to your applications, Third Edition

By Leif Larsen, Henning Larsen

eBook | Sep 2018 | 312 pages | 3rd Edition

Chapter 1. Getting Started with Microsoft Cognitive Services

You have just started on the road to learning about Microsoft Cognitive Services. This chapter will serve as a gentle introduction to the services that it offers. The end goal is to understand a bit more about what these Cognitive Services APIs can do for you. By the end of this chapter, we will have created an easy-to-use project template. You will have learned how to detect faces in images and have the number of faces spoken back to you.

Throughout this chapter, we will cover the following topics:

  • Applications that already use Microsoft Cognitive Services
  • Creating a template project
  • Detecting faces in images using the Face API
  • Discovering what Microsoft Cognitive Services can offer
  • Doing text-to-speech conversion using the Bing Speech API

Cognitive Services in action for fun and life-changing purposes

The best way to introduce Microsoft Cognitive Services is to see how it can be used in action. Microsoft (as well as other companies) has created a lot of example applications to show off its capabilities. Several may seem silly, such as the How-Old.net (http://how-old.net/) image analysis and the what-if-I-were-that-person application. These applications have generated quite a bit of buzz, and they show off some of the APIs in a good way.

The demonstration that is truly inspiring, though, is the one featuring a visually impaired person. Talking computers inspired him to create an application that allows blind and visually impaired people to understand what is going on around them. The application is built upon Microsoft Cognitive Services, and it gives us a good idea of how these APIs can be used to change the world for the better. Before moving on, head over to https://www.youtube.com/watch?v=R2mC-NUAmMk and take a peek into the world of Microsoft Cognitive Services.

Setting up the boilerplate code

Before we start diving into the action, we will go through some initial setup. More to the point, we will set up some boilerplate code that we will utilize throughout this book.

To get started, you will need to install a version of Visual Studio, preferably Visual Studio 2015 or later. The Community Edition will work fine for this purpose. You do not need anything more than what the default installation offers.

Throughout this book, we will utilize the different APIs to build a smart-house application. The application will be created to see how a futuristic house might appear. If you have seen the Iron Man movies, you can think of the application as resembling Jarvis, in some ways.

In addition, we will be making smaller sample applications using the Cognitive Services APIs. Doing so will allow us to look at each API, even those that did not make it to the final application.

What all the applications we will build have in common is that they are Windows Presentation Foundation (WPF) applications. WPF is fairly well known and allows us to build applications using the Model-View-ViewModel (MVVM) pattern. One of the advantages of taking this road is that we will be able to see the API usage quite clearly. It also separates the code so that you can bring the API logic to other applications with ease.

The following steps describe the process of creating a new WPF project:

  1. Open Visual Studio and select File | New | Project.
  2. In the dialog, select the WPF Application option from Templates | Visual C#, as shown in the following screenshot:
    Setting up the boilerplate code
  3. Delete the MainWindow.xaml file and create the files and folders that are shown in the following screenshot:
    Setting up the boilerplate code

We will not go through the MVVM pattern in detail, as this is out of the scope of this book. The key takeaway from the screenshot is that we have separated the View from what becomes the logic. We then rely on the ViewModel to connect the pieces.
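
Since the screenshot is only a placeholder here, the following is a plausible layout, inferred from the files and folders referenced later in this chapter:

    Chapter1
    |-- App.xaml
    |-- ObservableObject.cs
    |-- DelegateCommand.cs
    |-- Model/
    |-- View/
    |   `-- MainView.xaml
    `-- ViewModel/
        `-- MainViewModel.cs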

Note

If you want to learn more about MVVM, I recommend reading http://www.codeproject.com/Articles/100175/Model-View-ViewModel-MVVM-Explained.

To be able to run this, however, we do need to set up our project. Go through the following steps:

  1. Open the App.xaml file and make sure the StartupUri is set to the correct View, as shown in the following code (class name and namespace may vary based on the name of your application):
        <Application x:Class="Chapter1.App"
        xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x = "http://schemas.microsoft.com/winfx/2006/xaml"
        xmlns:local="clr-namespace:Chapter1"
        StartupUri="View/MainView.xaml">
  2. Open the MainViewModel.cs file and make it inherit from the ObservableObject class.
  3. Open the MainView.xaml file and add the MainViewModel file as DataContext to it, as shown in the following code (namespace and class names may vary based on the name of your application):
            <Window x:Class="Chapter1.View.MainView"
            
                xmlns="http://schemas.microsoft.com/
    winfx/2006/xaml/presentation"
                xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
                xmlns:d="http://schemas.microsoft.com/
    expression/blend/2008"
                xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
                xmlns:local="clr-namespace:Chapter1.View"
                xmlns:viewmodel="clr-namespace:Chapter1.ViewModel" mc:Ignorable="d"
                Title="Chapter 1" Height="300" Width="300">
                <Window.DataContext>
                    <viewmodel:MainViewModel />
                </Window.DataContext>

Following this, we need to fill in the content of the ObservableObject.cs file. We start off by having it implement the INotifyPropertyChanged interface as follows:

        public class ObservableObject : INotifyPropertyChanged

This is a rather small class, which should contain the following:

        // Requires a using System.ComponentModel; directive
        public event PropertyChangedEventHandler PropertyChanged;

        protected void RaisePropertyChangedEvent(string propertyName)
        {
            PropertyChanged?.Invoke(this, new PropertyChangedEventArgs(propertyName));
        }

We declare a property changed event and create a function to raise the event. This will allow the user interface (UI) to update its values when a given property has changed.

We also need to be able to execute actions when buttons are clicked. This can be achieved when we put some content into the DelegateCommand.cs file. Start by making the class implement the ICommand interface, and declare the following two variables:

        public class DelegateCommand : ICommand
        {
            private readonly Predicate<object> _canExecute;
            private readonly Action<object> _execute;

The two variables we have created will be set in the constructor. As you will notice, you are not required to pass the canExecute parameter, and you will see why in a bit:

            public DelegateCommand(Action<object> execute, Predicate<object> canExecute = null)
            {
                _execute = execute;
                _canExecute = canExecute;
            }

To complete the class, we add two public functions and one public event, as follows:

        public bool CanExecute(object parameter)
        {
            if (_canExecute == null) return true;
            return _canExecute(parameter);
        }

        public void Execute(object parameter)
        {
            _execute(parameter);
        }
  
        public event EventHandler CanExecuteChanged
        {
            add
            {
                CommandManager.RequerySuggested += value;
            }
            remove
            {
                CommandManager.RequerySuggested -= value;
            }
        }
    }

The functions declared will invoke the corresponding predicate or action declared in the constructor. These will be something we declare in our ViewModel instances, which, in turn, will be something that executes an action or tells the application whether it can or cannot execute an action. If a button is in a state where it is disabled (that is, when the CanExecute function returns false) and the state of the CanExecute function changes, the event that is declared will let the button know.

With that in place, you should be able to compile and run the application, so go on and try that. You will notice that the application does not actually do anything or present any data yet, but we have an excellent starting point.

Before we do anything else with the code, we are going to export the project as a template using the following steps. This is so that we do not have to redo all these steps for each small sample project we create:

  1. Replace the namespace names with substitute parameters, as shown in the example after these steps:
       • In all the .cs files, replace the namespace name with $safeprojectname$
       • In all the .xaml files, replace the project name with $safeprojectname$ where applicable (typically the class name and namespace declarations)
  2. Navigate to File | Export Template. This will open the Export Template wizard, as shown in the following screenshot:
     Setting up the boilerplate code
  3. Click on the Project Template button. Select the project we just created and click on the Next button.
  4. Leave the icon and preview image empty. Enter a recognizable name and description. Click on the Finish button:
     Setting up the boilerplate code
  5. The template is now exported to a .zip file and stored in the specified location.
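
As an example, a namespace declaration in one of the .cs files changes as follows:

    // Before exporting the template:
    namespace Chapter1.ViewModel

    // After inserting the substitute parameter:
    namespace $safeprojectname$.ViewModel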

By default, the template will be imported into Visual Studio again. We are going to test that it works immediately by creating a project for this chapter. So go ahead and create a new project, selecting the template that we just created. The template should be listed in the Visual C# section of the installed templates list. Call the project Chapter1, or something else, if you prefer. Make sure it compiles and that you are able to run it before we move to the next step.

Detecting faces with the Face API

With the newly created project, we will now try our first API, the Face API. We will not be doing a whole lot, but we will still see how simple it is to detect faces in images.

The steps we need to go through in order to do this are as follows:

  1. Register for a Face API preview subscription at Microsoft Azure
  2. Add the necessary NuGet packages to our project
  3. Add a UI to the application
  4. Detect faces on command

Head over to https://portal.azure.com to start the process of registering for a free subscription to the Face API. You will be taken to a login page. Log on with your Microsoft account; if you do not have one, then register for one.

Once logged in, you will need to add a new resource by clicking on + New on the right-hand menu. Search for Face API and select the first entry, as shown in the following screenshot:

Detecting faces with the Face API

Enter a name and select the subscription, location, and pricing tier. At the time of writing, there are two pricing options, one free and one paid, as shown in the following screenshot:

Detecting faces with the Face API

Once created, you can go into the newly created resource. You will need one of the two available API keys. These can be found in the Keys option of the Resource Management menu, as shown in the following screenshot:

Detecting faces with the Face API

Some of the APIs that we will cover have their own NuGet packages created. Whenever this is the case, we will utilize those packages to do the operations we want to perform. A common feature of all APIs is that they are REST APIs, which means that in practice you can use them with whichever language you want. For those APIs that do not have their own NuGet package, we call the APIs directly through HTTP.
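
To give an idea of what such a direct call looks like, here is a minimal sketch using HttpClient; the endpoint path, key, and request body are illustrative assumptions, not code we will use later in this chapter:

    // Hypothetical direct REST call to a detection endpoint (inside an async method).
    // Requires using System.Net.Http, System.Text, and System.Diagnostics.
    using (var client = new HttpClient())
    {
        client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", "YOUR_API_KEY_HERE");

        var body = new StringContent(
            "{\"url\": \"https://example.com/image.jpg\"}",
            Encoding.UTF8, "application/json");

        HttpResponseMessage response = await client.PostAsync(
            "https://westus.api.cognitive.microsoft.com/face/v1.0/detect", body);

        string json = await response.Content.ReadAsStringAsync();
        Debug.WriteLine(json);
    }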

A NuGet package does exist for the Face API we are using now, so we need to add that to our project. Head over to the NuGet Package Manager option for the project we created earlier. In the Browse tab, search for the Microsoft.ProjectOxford.Face package and install the package from Microsoft, as shown in the following screenshot:

Detecting faces with the Face API

As you will notice, another package will also be installed. This is the Newtonsoft.Json package, which is required by the Face API.

The next step is to add a UI to our application. We will be adding this in the MainView.xaml file. Open this file; it should contain the template code that we created earlier. This means that we have a DataContext and can create bindings for our elements, which we will define now.

First, we add a grid and define some rows for the grid, as follows:

    <Grid>
        <Grid.RowDefinitions>
            <RowDefinition Height="*" />
            <RowDefinition Height="20" />
            <RowDefinition Height="30" />
        </Grid.RowDefinitions>

Three rows are defined. The first is a row where we will have an image, the second is a line for the status message, and the last is where we will place some buttons.

Next, we add our image element, as follows:

        <Image x:Name="FaceImage" Stretch="Uniform" Source=
            "{Binding ImageSource}" Grid.Row="0" />

We have given it a unique name. By setting the Stretch parameter to Uniform, we ensure that the image keeps its aspect ratio. Furthermore, we place this element in the first row. Lastly, we bind the image source to a BitmapImage in the ViewModel, which we will look at in a bit.

The next row will contain a text block with some status text. The Text property will be bound to a string property in the ViewModel, as follows:

        <TextBlockx:Name="StatusTextBlock" Text=
            "{Binding StatusText}" Grid.Row="1" />

The last row will contain one button to browse for an image and one button to be able to detect faces. The command properties of both buttons will be bound to the DelegateCommand properties in the ViewModel, as follows:

        <Button x:Name = "BrowseButton"
                  Content = "Browse" Height="20" Width="140" 
                  HorizontalAlignment = "Left"
                  Command="{Binding BrowseButtonCommand}"
                  Margin="5, 0, 0, 5"Grid.Row="2" />

        <Button x:Name="DetectFaceButton"
                  Content="Detect face" Height="20" Width="140"
                  HorizontalAlignment="Right"
                  Command="{Binding DetectFaceCommand}"
                  Margin="0, 0, 5, 5"Grid.Row="2"/>

With the View in place, make sure that the code compiles, and run it. This should present you with the following UI:

Detecting faces with the Face API

The last part of the process is to create the binding properties in our ViewModel and make the buttons execute something. Open the MainViewModel.cs file. The class should already inherit from the ObservableObject class. First, we define two variables as follows:

    private string _filePath;
    private IFaceServiceClient _faceServiceClient;

The string variable will hold the path to our image, while the IFaceServiceClient variable is used to interface with the Face API. Next, we define two properties, as follows:

    private BitmapImage _imageSource;
    public BitmapImage ImageSource
    {
        get { return _imageSource; }
        set
        {
            _imageSource = value;
            RaisePropertyChangedEvent("ImageSource");
        }
    }

    private string _statusText;
    public string StatusText
    {
        get { return _statusText; }
        set
        {
           _statusText = value;
           RaisePropertyChangedEvent("StatusText");
        }
    }

What we have here is a property for the BitmapImage, mapped to the Image element in the View. We also have a string property for the status text, mapped to the text block element in the View. As you may also notice, when either of the properties is set, we call the RaisePropertyChangedEvent event. This will ensure that the UI updates when either property has new values.

Next, we define our two DelegateCommand objects and perform some initialization through the constructor, as follows:

    public ICommand BrowseButtonCommand { get; private set; }
    public ICommand DetectFaceCommand { get; private set; }

    public MainViewModel()
    {
        StatusText = "Status: Waiting for image...";

        _faceServiceClient = new FaceServiceClient("YOUR_API_KEY_HERE", "ROOT_URI");

        BrowseButtonCommand = new DelegateCommand(Browse);
        DetectFaceCommand = new DelegateCommand(DetectFace, CanDetectFace);
    }

The properties for the commands are public to get, but private to set. This means that we can only set them from within the ViewModel. In our constructor, we start off by setting the status text. Next, we create an object of the Face API, which needs to be created with the API key we got earlier. In addition, it needs to specify the root URI, pointing at the location of the service. It can, for instance, be https://westeurope.api.cognitive.microsoft.com/face/v1.0 if the service is located in west Europe.

If the service is located in the West US region, you would replace westeurope with westus. The root URI can be found in the following place in the Azure Portal:

Detecting faces with the Face API
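
Putting this together, the constructor call for a resource deployed in the West US region would look as follows; the key is a placeholder:

        // Key and region are placeholders; use the values from your own resource
        _faceServiceClient = new FaceServiceClient(
            "YOUR_API_KEY_HERE",
            "https://westus.api.cognitive.microsoft.com/face/v1.0");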

Lastly, we create the DelegateCommand objects for our command properties. Note how the browse command does not specify a predicate. This means that it will always be possible to click on the corresponding button. To make this compile, we need to create the functions specified in the DelegateCommand constructors: the Browse, DetectFace, and CanDetectFace functions.

We start the Browse function by creating an OpenFileDialog object. This dialog is assigned a filter for JPEG images, and, in turn, it is opened, as shown in the following code. When the dialog is closed, we check the result. If the dialog was canceled, we simply stop further execution:

    private void Browse(object obj)
    {
        var openDialog = new Microsoft.Win32.OpenFileDialog();
        openDialog.Filter = "JPEG Image(*.jpg)|*.jpg";
        bool? result = openDialog.ShowDialog();

        // ShowDialog returns a nullable bool; treat null (dismissed) as cancelled
        if (result != true) return;

With the dialog closed, we grab the filename of the file selected and create a new URI from it, as shown in the following code:

        _filePath = openDialog.FileName;
        Uri fileUri = new Uri(_filePath);

With the newly created URI, we want to create a new BitmapImage. We specify that it should use no cache, and we set its URI source to the URI that we created, as shown in the following code:

        BitmapImage image = new BitmapImage();

        // Properties must be set between BeginInit and EndInit
        image.BeginInit();
        image.CacheOption = BitmapCacheOption.None;
        image.UriSource = fileUri;
        image.EndInit();

The last step we take is to assign the bitmap image to our BitmapImage property so that the image is shown in the UI. We also update the status text to let the user know that the image has been loaded, as shown in the following code:

        ImageSource = image;
        StatusText = "Status: Image loaded...";
    }

The CanDetectFace function checks whether or not the DetectFaceButton button should be enabled. In this case, it checks whether our image property actually has a URI. If it does, then by extension that means we have an image, and we should be able to detect faces, as shown in the following code:

    private bool CanDetectFace(object obj)
    {
        return !string.IsNullOrEmpty(ImageSource?.UriSource.ToString());
    }

Our DetectFace method calls an async method to upload and detect faces. The return value contains an array of the FaceRectangles variable. This array contains the rectangle area for all face positions in the given image. We will look into the function that we are going to call in a bit.

After the call has finished executing, we print a line containing the number of faces to the debug console window, as follows:

    private async void DetectFace(object obj)
    {
        FaceRectangle[] faceRects = await UploadAndDetectFacesAsync();

        string textToSpeak = "No faces detected";

        if (faceRects.Length == 1)
            textToSpeak = "1 face detected";
        else if (faceRects.Length > 1)
            textToSpeak = $"{faceRects.Length} faces detected";

        Debug.WriteLine(textToSpeak);
    }

In the UploadAndDetectFacesAsync function, we create a Stream from the image, as shown in the following code. This stream will be used as input for the actual call to the Face API service:

    private async Task<FaceRectangle[]> UploadAndDetectFacesAsync()
    {
        StatusText = "Status: Detecting faces...";

        try
        {
            using (Stream imageFileStream = File.OpenRead(_filePath))

The following line is the actual call to the detection endpoint for the Face API:

            Face[] faces = await _faceServiceClient.DetectAsync(imageFileStream, true, true, new List<FaceAttributeType>() { FaceAttributeType.Age });

The first parameter is the file stream that we created in the previous step. The rest of the parameters are all optional. The second parameter should be true if you want to get a face ID. The next parameter specifies whether you want to receive face landmarks or not. The last parameter takes a list of facial attributes that you may want to receive. In our case, we want the age parameter to be returned, so we need to specify that.
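
As a sketch of how more attributes can be requested, the call below asks for both age and gender; the named parameters are an assumption based on the package's DetectAsync signature:

            // Hypothetical variant: request face IDs, skip landmarks,
            // and ask for both age and gender attributes
            Face[] faces = await _faceServiceClient.DetectAsync(
                imageFileStream,
                returnFaceId: true,
                returnFaceLandmarks: false,
                returnFaceAttributes: new List<FaceAttributeType>
                    { FaceAttributeType.Age, FaceAttributeType.Gender });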

The return type of this function call is an array of faces, with all the parameters that you have specified, as shown in the following code:

            List<double> ages = faces.Select(face => face.FaceAttributes.Age).ToList();
            FaceRectangle[] faceRects = faces.Select(face => face.FaceRectangle).ToArray();

            StatusText = "Status: Finished detecting faces...";

            foreach (var age in ages)
            {
                Debug.WriteLine(age);
            }
            return faceRects;
        }
    }

The first line iterates over all faces and retrieves the approximate age of all faces. This is later printed to the debug console window, in the foreach loop.

The second line iterates over all faces and retrieves the face rectangle, with the rectangular location of all faces. This is the data that we return to the calling function.

Add a catch clause to finish the method. If an exception is thrown in our API call, we catch it, show the error message, and return an empty FaceRectangle array.
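
A minimal sketch of that catch clause could look like this:

        catch (Exception ex)
        {
            StatusText = $"Status: Failed to detect faces - {ex.Message}";
            return new FaceRectangle[0];
        }
    }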

With that code in place, you should now be able to run the full example. The end result will look like the following screenshot:

Detecting faces with the Face API

The debug console window will print the following text:

    1 face detected
    23,7

An overview of different APIs

Now that you have seen a basic example of how to detect faces, it is time to learn a bit about what else Cognitive Services can do for you. When using Cognitive Services, you have more than 20 different APIs at hand. These are, in turn, separated into five top-level domains depending on what they do. These domains are vision, speech, language, knowledge, and search. We will learn more about them in the following sections.

Vision

APIs in the vision domain allow your apps to understand images and video content. They allow you to retrieve information about faces, feelings, and other visual content. You can stabilize videos and recognize celebrities. You can read text in images and generate thumbnails from videos and images.

There are five APIs contained in the vision domain, which we will look at now.

Computer vision

Using the computer vision API, you can retrieve actionable information from images. This means that you can identify content (such as image format, image size, colors, faces, and more). You can detect whether or not an image is adult/racy. This API can recognize text in images and extract it to machine-readable words. It can detect celebrities from a variety of areas. Lastly, it can generate storage-efficient thumbnails with smart-cropping functionality.

We will look into computer vision in Chapter 2, Analyzing Images to Recognize a Face.

Face

We have already seen a very basic example of what the Face API can do. The rest of the API revolves around the detection, identification, organization, and tagging of faces in photos. As well as face detection, you can also see how likely it is that two faces belong to the same person. You can identify faces and also find similar-looking faces. We can also use the API to recognize emotions in images.

We will dive further into the Face API in Chapter 2, Analyzing Images to Recognize a Face.

Video indexer

Using the video indexer API, you can start indexing videos immediately upon upload. This means that you can get video insights without using experts or custom code. Content discovery can be improved by utilizing the powerful artificial intelligence of this API, making your content more discoverable.

The video indexer API will be covered in greater detail in Chapter 3, Analyzing Videos.

Content moderator

The content moderator API utilizes machine learning to automatically moderate content. It can detect potentially offensive and unwanted images, videos, and text for over 100 languages. In addition, it allows you to review detected material to improve the service.

The content moderator will be covered in Chapter 2, Analyzing Images to Recognize a Face.

Custom vision service

The custom vision service allows you to upload your own labeled images to a vision service. This means that you can add images that are specific to your domain to allow recognition using the computer vision API.

The custom vision service will be covered in more detail in Chapter 2, Analyzing Images to Recognize a Face.

Speech

Adding one of the Speech APIs allows your application to hear and speak to your users. The APIs can filter noise and identify speakers. Based on the recognized intent, they can drive further actions in your application.

The speech domain contains three APIs that are outlined in the following sections.

Bing Speech

Adding the Bing Speech API to your application allows you to convert speech to text and vice versa. You can convert spoken audio to text either by utilizing a microphone or other sources in real time or by converting audio from files. The API also offers speech intent recognition, which is trained by the Language Understanding Intelligent Service (LUIS) to understand the intent.

Speaker recognition

The speaker recognition API gives your application the ability to know who is talking. By using this API, you can verify that the person that is speaking is who they claim to be. You can also determine who an unknown speaker is based on a group of selected speakers.

Translator speech API

The translator speech API is a cloud-based automatic translation service for spoken audio. Using this API, you can add end-to-end translation across web apps, mobile apps, and desktop applications. Depending on your use case, it can provide you with partial translations, full translations, and transcripts of the translations. We will cover all speech-related APIs in Chapter 5, Speaking with Your Application.

Language

APIs that are related to the language domain allow your application to process natural language and learn how to recognize what users want. You can add textual and linguistic analysis to your application, as well as natural language understanding.

The following four APIs can be found in the language domain.

Bing Spell Check

The Bing Spell Check API allows you to add advanced spell checking to your application.

This API will be covered in Chapter 6, Understanding Text.

Language Understanding Intelligent Service (LUIS)

LUIS is an API that can help your application understand commands from your users. Using this API, you can create language models that understand intents. By using models from Bing and Cortana, you can make these models recognize common requests and entities (such as places, times, and numbers). You can add conversational intelligence to your applications.

LUIS will be covered in Chapter 4, Let Applications Understand Commands.

Text analytics

The text analytics API will help you in extracting information from text. You can use it to find the sentiment of a text (whether the text is positive or negative), and you will also be able to detect the language, topic, key phrases, and entities used throughout the text. We will also cover the text analytics API in Chapter 6, Understanding Text.

Translator Text API

By adding the translator text API, you can get textual translations for over 60 languages. It can detect languages automatically, and you can customize the API to your needs. In addition, you can improve translations by creating user groups, utilizing the power of crowdsourcing.

The translator text API will not be covered in this book.

Knowledge

When we talk about knowledge APIs, we are talking about APIs that allow you to tap into rich knowledge. This may be knowledge from the web or from academia, or it may be your own data. Using these APIs, you will be able to explore the different nuances of knowledge.

The following five APIs are contained in the knowledge API domain.

Project Academic Knowledge

Using the Project Academic Knowledge API, you can explore relationships among academic papers, journals, and authors. This API allows you to interpret natural language user query strings, which allows your application to anticipate what the user is typing. It will evaluate what is being typed and return academic knowledge entities.

This API will be covered in more detail in Chapter 8, Query Structured Data in a Natural Way.

Knowledge exploration

The knowledge exploration API will let you add the possibility of using interactive searches for structured data in your projects. It interprets natural language queries and offers autocompletions to minimize user effort. Based on the query expression received, it will retrieve detailed information about matching objects.

Details on this API will be covered in Chapter 8, Query Structured Data in a Natural Way.

Recommendations solution

The recommendations solution API allows you to provide personalized product recommendations for your customers. You can use this API to add a frequently-bought-together functionality to your application. Another feature that you can add is item-to-item recommendations, which allows customers to see what other customers like. This API will also allow you to add recommendations based on the prior activity of the customer.

We will go through this API in Chapter 7, Building Recommendation Systems for Businesses.

QnA Maker

The QnA Maker is a service used to distill information from frequently asked questions (FAQs). Using existing FAQs, either online or in a document, you can create question-and-answer pairs. Pairs can be edited, removed, and modified, and you can add several similar questions to match a given pair.

We will cover QnA Maker in Chapter 8, Query Structured Data in a Natural Way.

Project Custom Decision Service

Project Custom Decision Service is a service designed to use reinforcement learning to personalize content. The service understands context and can provide context-based content.

This book does not cover Project Custom Decision Service.

Search

Search APIs give you the ability to make your applications more intelligent with the power of Bing. Using these APIs, you can use a single call to access data from billions of web pages, images, videos, and news articles.

The search domain contains the following APIs.

Bing Web Search

With Bing Web Search, you can search for details in billions of web documents that are indexed by Bing. All the results can be arranged and ordered according to a layout that you specify, and the results are customized to the location of the end user.

Bing Web Search will be covered in Chapter 9, Adding Specialized Search.

Bing Image Search

Using the Bing Image Search API, you can add an advanced image and metadata search to your application. Results include URLs to images, thumbnails, and metadata. You will also be able to get machine-generated captions, similar images, and more. This API allows you to filter the results based on image type, layout, freshness (how new the image is), and license. Bing Image Search will be covered in Chapter 9, Adding Specialized Search.

Bing Video Search

Bing Video Search will allow you to search for videos and return rich results. The results could contain metadata from the videos, static or motion-based thumbnails, and the video itself. You can add filters to the results based on freshness, video length, resolution, and price.

Bing Video Search will be covered in Chapter 9, Adding Specialized Search.

Bing News Search

If you add Bing News Search to your application, you can search for news articles. Results can include authoritative images, related news and categories, information on the provider, URLs, and more. To be more specific, you can filter news based on topics.

Bing News Search will be covered in Chapter 9, Adding Specialized Search.

Bing Autosuggest

The Bing Autosuggest API is small but powerful. It allows your users to search faster by using search suggestions, letting you connect powerful search functionality to your apps.

Bing Autosuggest will be covered in Chapter 9, Adding Specialized Search.

Bing Visual Search

Using the Bing Visual Search API, you can identify and classify images. You can also acquire knowledge about images.

Bing Visual Search will be covered in Chapter 9, Adding Specialized Search.

Bing Custom Search

By utilizing the Bing Custom Search API, you can create a powerful, customized search that fits your needs. It is an ad-free, commercial tool that allows you to deliver the search results you want.

Bing Custom Search will be covered in Chapter 9, Adding Specialized Search.

Bing Entity Search

Using the Bing Entity Search API, you can enhance your searches. The API will find the most relevant entity based on your search terms. It will find entities such as famous people, places, movies, and more.

We will not cover Bing Entity Search in this book.

Getting feedback on detected faces

Now that we have seen what else Microsoft Cognitive Services can offer, we are going to add an API to our face detection application. In this section, we will add the Bing Speech API to make the application say the number of faces out loud.

This feature of the API is not provided in the NuGet package, and as such, we are going to use the REST API.

To reach our end goal, we are going to add two new classes, TextToSpeak and Authentication. The first class will be in charge of generating the correct headers and making the calls to our service endpoint. The latter class will be in charge of generating an authentication token. This will be tied together in our ViewModel, where we will make the application speak back to us.

We need to get our hands on an API key first. Head over to the Microsoft Azure Portal. Create a new service for Bing Speech.

To be able to call the Bing Speech API, we need to have an authorization token. Go back to Visual Studio and create a new file called Authentication.cs. Place this in the Model folder.

We need to add two new references to the project. Find the System.Runtime.Serialization and System.Web packages in the Assembly tab in the Add References window and add them.

In our Authentication class, define the following private members and one public property:

    private string _requestDetails;
    private string _clientSecret;
    private string _token;
    private Timer _tokenRenewer;

    private const int TokenRefreshInterval = 9;

    public string Token { get { return _token; } }

The constructor should accept one string parameter, clientSecret. The clientSecret parameter is the API key you signed up for.

In the constructor, assign the _clientSecret variable, as follows:

    _clientSecret = clientSecret;

Create a new function called Initialize, as follows:

    public async Task Initialize()
    {
        _token = await GetToken();

        _tokenRenewer = new Timer(new TimerCallback(OnTokenExpiredCallback), this,
            TimeSpan.FromMinutes(TokenRefreshInterval),
            TimeSpan.FromMilliseconds(-1));
    }

We then fetch the access token in a method that we will create shortly.

Finally, we create our timer class, which will call the callback function in nine minutes. The callback function will need to fetch the access token again and assign it to the _token variable. It also needs to ensure that we run the timer again in nine minutes.
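
A minimal sketch of such a callback, matching the description above, might look as follows:

    private async void OnTokenExpiredCallback(object stateInfo)
    {
        try
        {
            _token = await GetToken();
        }
        finally
        {
            // Schedule the next renewal, another nine minutes from now
            _tokenRenewer.Change(TimeSpan.FromMinutes(TokenRefreshInterval),
                TimeSpan.FromMilliseconds(-1));
        }
    }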

Next, we need to create the GetToken method. This method should return a Task<string> object, and it should be declared as private and marked as async.

In the method, we start by creating an HttpClient object, pointing to an endpoint that will generate our token. We specify the root endpoint and add the token issue path, as follows:

    using (var client = new HttpClient())
    {
        client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", _clientSecret);

        UriBuilder uriBuilder = new UriBuilder("https://api.cognitive.microsoft.com/sts/v1.0");
        uriBuilder.Path = "/issueToken";

We then go on to make a POST call to generate a token, as follows:

var result = await client.PostAsync(uriBuilder.Uri.AbsoluteUri, null);

When the request has been sent, we expect there to be a response. We want to read this response and return the response string:

return await result.Content.ReadAsStringAsync();

Add a new file called TextToSpeak.cs, if you have not already done so. Put this file in the Model folder.

Beneath the newly created class (but inside the namespace), we want to add two event argument classes. These will be used to handle audio events, which we will see later.

The AudioEventArgs class simply takes a generic stream, as shown in the following code. You can imagine it being used to send the audio stream to our application:

    public class AudioEventArgs : EventArgs
    {
        public AudioEventArgs(Stream eventData)
        {
            EventData = eventData;
        }

        public Stream EventData { get; private set; }
    }

The next class allows us to send an event with a specific error message:

    public class AudioErrorEventArgs : EventArgs
    {
        public AudioErrorEventArgs(string message)
        {
            ErrorMessage = message;
        }

        public string ErrorMessage { get; private set; }
    }

We move on to start on the TextToSpeak class, where we start off by declaring some events and class members, as follows:

    public class TextToSpeak
    {
        public event EventHandler<AudioEventArgs> OnAudioAvailable;
        public event EventHandler<AudioErrorEventArgs> OnError;

        private string _gender;
        private string _voiceName;
        private string _outputFormat;
        private string _authorizationToken;
        private string _token;

        private List<KeyValuePair<string, string>> _headers = new List<KeyValuePair<string, string>>();

The first two lines in the class are events that use the event argument classes that we created earlier. These events will be triggered if a call to the API finishes (returning some audio), or if anything fails. The next few lines are string variables, which we will use as input parameters. We have one line to contain our access token information. The last line creates a new list, which we will use to hold our request headers.

We add two constant strings to our class, as follows:

    private const string RequestUri = "https://speech.platform.bing.com/synthesize";

    private const string SsmlTemplate =
        "<speak version='1.0' xml:lang='en-US'>" +
        "<voice xml:lang='en-US' xml:gender='{0}' name='{1}'>{2}</voice>" +
        "</speak>";

The first string contains the request URI. That is the REST API endpoint that we need to call to execute our request. Next, we have a string defining our Speech Synthesis Markup Language (SSML) template. This is where we will specify what the speech service should say, and how it should say it.

Next, we create our constructor, as follows:

        public TextToSpeak()
        {
            _gender = "Female";
            _outputFormat = "riff-16khz-16bit-mono-pcm";
            _voiceName = "Microsoft Server Speech Text to Speech Voice (en-US, ZiraRUS)";
        }

Here, we are just initializing some of the variables that we declared earlier. We define the gender as female and specify a particular voice. In terms of gender, it can be either female or male. The voice name can be one of a long list of options; we will look more into the details of that list when we go through this API in a later chapter.

The last line specifies the output format of the audio. This will define the format and codec in use by the resultant audio stream. Again, this can be a number of varieties, which we will look into in a later chapter.

Following the constructor, there are three public methods that we will create. These will generate an authentication token and some HTTP headers, and finally execute our call to the API. Before we create these, you should add two helper methods to be able to raise our events. Call them the RaiseOnAudioAvailable and RaiseOnError methods. They should accept AudioEventArgs and AudioErrorEventArgs as parameters.
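
A minimal sketch of those two helpers could look like this:

        private void RaiseOnAudioAvailable(AudioEventArgs e)
        {
            OnAudioAvailable?.Invoke(this, e);
        }

        private void RaiseOnError(AudioErrorEventArgs e)
        {
            OnError?.Invoke(this, e);
        }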

Next, add a new method called the GenerateHeaders method, as follows:

        public void GenerateHeaders()
        {
            _headers.Add(new KeyValuePair<string, string>("Content-Type", "application/ssml+xml"));
            _headers.Add(new KeyValuePair<string, string>("X-Microsoft-OutputFormat", _outputFormat));
            _headers.Add(new KeyValuePair<string, string>("Authorization", _authorizationToken));
            _headers.Add(new KeyValuePair<string, string>("X-Search-AppId", Guid.NewGuid().ToString("N")));
            _headers.Add(new KeyValuePair<string, string>("X-Search-ClientID", Guid.NewGuid().ToString("N")));
            _headers.Add(new KeyValuePair<string, string>("User-Agent", "Chapter1"));
        }

Here, we add the HTTP headers to our previously created list. These headers are required for the service to respond, and if any are missing, it will yield an HTTP/400 response. We will cover what we are using as headers in more detail later. For now, just make sure that they are present.

Following this, we want to add a new method called GenerateAuthenticationToken, as follows:

        public async Task<bool> GenerateAuthenticationToken(string clientSecret)
        {
            Authentication auth = new Authentication(clientSecret);

This method accepts one string parameter, the client secret (your API key), and is declared as async so that we can await the token initialization. First, we create a new object of the Authentication class, which we looked at earlier, as follows:

        try
        {
            // Initialize fetches the token and starts the renewal timer
            await auth.Initialize();
            _token = auth.Token;

            if (_token != null)
            {
                _authorizationToken = $"Bearer {_token}";

                return true;
            }
            else
            {
                RaiseOnError(new AudioErrorEventArgs("Failed to generate authentication token."));
                return false;
            }
        }

We use the authentication object to retrieve an access token. This token is used in our authorization token string, which, as we saw earlier, is being passed on in our headers. If the application for some reason fails to generate the access token, we trigger an error event.

Finish this method by adding the associated catch clause. If any exceptions occur, we want to raise a new error event.
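
A sketch of that catch clause, in line with the description above:

        catch (Exception ex)
        {
            RaiseOnError(new AudioErrorEventArgs(ex.GetBaseException().Message));
            return false;
        }
        }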

The last method that we need to create in this class is going to be called the SpeakAsync method, as shown in the following code. This method will actually perform the request to the Speech API:

        public Task SpeakAsync(string textToSpeak, CancellationToken cancellationToken)
        {
            var cookieContainer = new CookieContainer();
            var handler = new HttpClientHandler()
            {
                CookieContainer = cookieContainer
            };
            var client = new HttpClient(handler);

The method takes two parameters. One is a string, which is the text that we want to be spoken. The other is cancellationToken; this can be used to propagate a notification that the operation should be cancelled.

When entering the method, we create three objects that we will use to execute the request. These are classes from the .NET library. We will not be going through them in any more detail.

We generated some headers earlier, and we need to add these to our HTTP client. We do this by adding the headers in the following foreach loop, which runs through the entire list, as shown in the following code:

            foreach(var header in _headers)
            {
                client.DefaultRequestHeaders.TryAddWithoutValidation (header.Key, header.Value);
            }

Next, we create an HTTP Request Message, specifying the request URI and the fact that we will send data through the POST method. We also specify the content using the SSML template that we created earlier, adding the correct parameters (gender, voice name, and the text we want to be spoken), as shown in the following code:

            var request = new HttpRequestMessage(HttpMethod.Post, RequestUri)
            {
                Content = new StringContent(string.Format(SsmlTemplate, _gender, _voiceName, textToSpeak))
            };

We use the HTTP client to send the HTTP request asynchronously, as follows:

            var httpTask = client.SendAsync(request, HttpCompletionOption.ResponseHeadersRead, cancellationToken);

The following code is a continuation of the asynchronous send call that we made previously. This will run asynchronously as well, and check the status of the response. If the response is successful, it will read the response message as a stream and trigger the audio event. If everything succeeds, then that stream should contain our text in spoken words:

    var saveTask = httpTask.ContinueWith(async (responseMessage) =>
    {
        try
        {
            if (responseMessage.IsCompleted &&
                responseMessage.Result != null &&  
                responseMessage.Result.IsSuccessStatusCode) {
                var httpStream = await responseMessage.Result.Content.ReadAsStreamAsync().ConfigureAwait(false);
                RaiseOnAudioAvailable(new AudioEventArgs(httpStream));
            } else {
                RaiseOnError(new AudioErrorEventArgs($"Service returned {responseMessage.Result.StatusCode}"));
            }
        }
        catch(Exception e)
        {
            RaiseOnError(new AudioErrorEventArgs(e.GetBaseException().Message));
        }
    }

If the response indicates anything other than success, we will raise the error event.

The continuation already catches exceptions; we also want to add a finally clause to it, in which we dispose of all the objects used for the request.
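
A sketch of such a finally clause, placed after the catch inside the continuation (assuming nothing else holds on to these objects):

        finally
        {
            // Clean up the objects used for this request
            if (responseMessage.IsCompleted && responseMessage.Result != null)
                responseMessage.Result.Dispose();

            request.Dispose();
            client.Dispose();
            handler.Dispose();
        }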

The final code we need specifies that the continuation task is attached to the parent task and runs on the default task scheduler. We also need to add cancellationToken to this task. Add the following code to finish off the method:

    }, cancellationToken, TaskContinuationOptions.AttachedToParent, TaskScheduler.Default);

    return saveTask;
}

With this in place, we are now able to utilize this class in our application. Open the MainViewModel.cs file and declare a new class variable, as follows:

        private TextToSpeak _textToSpeak;

Add the following code in the constructor to initialize the newly added object. We also need to call a function to generate the authentication token, as follows:

            _textToSpeak = new TextToSpeak();
            _textToSpeak.OnAudioAvailable += _textToSpeak_OnAudioAvailable;
            _textToSpeak.OnError += _textToSpeak_OnError;

            GenerateToken();

After we have created the object, we hook up the two events to event handlers. Then we generate an authentication token by creating a GenerateToken function with the following content:

public async void GenerateToken()
{
    if (await _textToSpeak.GenerateAuthenticationToken("BING_SPEECH_API_KEY_HERE"))
        _textToSpeak.GenerateHeaders();
}

The function generates an authentication token, specifying the API key for the Bing Speech API. If that call succeeds, we generate the required HTTP headers.

We need to add the event handlers, so create the _textToSpeak_OnError method first, as follows:

            private void _textToSpeak_OnError(object sender, AudioErrorEventArgs e)
            {
                StatusText = $"Status: Audio service failed -  {e.ErrorMessage}";
            }

It should be a rather simple method, just outputting the error message to the user in the status text field.

Next, we need to create a _textToSpeak_OnAudioAvailable method, as follows:

        private void _textToSpeak_OnAudioAvailable(object sender, AudioEventArgs e)
        {
            SoundPlayer player = new SoundPlayer(e.EventData);
            player.Play();
            e.EventData.Dispose();
        }

Here, we utilize the SoundPlayer class from the .NET framework. This allows us to add the stream data directly and simply play the message.

The last part that we need for everything to work is the call to the SpeakAsync method. We make this call by adding the following at the end of our DetectFace method:

    await _textToSpeak.SpeakAsync(textToSpeak, CancellationToken.None);

With that in place, you should now be able to compile and run the application. By loading a photo and clicking on Detect face, you should be able to get the number of faces in the image spoken back to you. Just remember to have your audio turned on!

Summary

This chapter was a brief introduction to Microsoft Cognitive Services. We started off by creating a template project to easily create new projects for the coming chapters. We tried this template by creating an example project for this chapter. Then you learned how to detect faces in images by utilizing the Face API. From there, we took a quick tour of what Cognitive Services has to offer. We finished off by adding text-to-speech capabilities to our application by using the Bing Speech API.

The next chapter will go into more detail of the vision part of the APIs. There, you will learn how to analyze images using the computer vision API. You will go into more detail about the Face API and will learn how to detect emotions in faces by using the emotion API. We will use some of this to start building our smart-house application.


Key benefits

  • Build applications with computer vision, speech recognition, and language processing capabilities
  • Process and analyze data in the form of text, images, and videos
  • Build smarter applications in Visual Studio using real-world examples

Description

Microsoft Cognitive Services is a set of APIs for integrating artificial intelligence into your applications to solve logical business problems. If you're new to developing applications with AI, Learning Microsoft Cognitive Services will give you a comprehensive introduction to Microsoft's AI stack and get you up to speed in no time. The book introduces you to 24 APIs, including Emotion, Language, Vision, Speech, Knowledge, and Search. Using Visual Studio, you can develop applications with enhanced capabilities for image processing, speech recognition, text processing, and much more. Moving forward, you will work with datasets that enable your applications to process various data in the form of images, videos, or text. By the end of the book, you'll be able to confidently explore Cognitive Services APIs for building intelligent applications that can be deployed for real-world business uses.

Who is this book for?

If you’re a developer or machine learning enthusiast who wants to get started with building intelligent applications, this book is for you. Though you’re not expected to have much programming experience, some knowledge of .NET and Visual Studio will help you undertake the tasks explained in this book easily.

What you will learn

  • Identify a person through visual and audio inspection
  • Reduce user effort by utilizing AI capabilities
  • Understand how to analyze images and text in different ways
  • Add video and image analysis to applications using Vision APIs
  • Use the Search API to find anything you want from your database
  • Analyze text to extract information and explore text structure

Product Details

Publication date : Sep 27, 2018
Length : 312 pages
Edition : 3rd
Language : English
ISBN-13 : 9781789803686
Vendor : Microsoft



Table of Contents

13 Chapters
1. Getting Started with Microsoft Cognitive Services
2. Analyzing Images to Recognize a Face
3. Analyzing Videos
4. Letting Applications Understand Commands
5. Speaking with Your Application
6. Understanding Text
7. Building Recommendation Systems for Businesses
8. Querying Structured Data in a Natural Way
9. Adding Specialized Searches
10. Connecting the Pieces
A. LUIS Entities
B. License Information
Index