Learning Microsoft Cognitive Services

Chapter 1. Getting Started with Microsoft Cognitive Services

You have just started on the road to learning about Microsoft Cognitive Services. This chapter will serve as a gentle introduction to the services. The end goal is to understand a bit more about what these cognitive APIs can do for you. By the end of this chapter, we will have created an easy-to-use project template. You will have learned how to detect faces in images, and how to have the number of faces spoken back to you.

Throughout this chapter, we will cover the following topics:

  • Learning about some applications already using Microsoft Cognitive Services
  • Creating a template project
  • Detecting faces in images using Face API
  • Discovering what Microsoft Cognitive Services can offer
  • Doing text-to-speech conversion using Bing Speech API

Cognitive Services in action for fun and life-changing purposes

The best way to introduce Microsoft Cognitive Services is to see how it can be used in action. Microsoft, and others, have created a lot of example applications to show off the capabilities. Several may be seen as silly, such as the How-Old.net (http://how-old.net/) image analysis and the what-if-I-were-that-person application. These applications have generated quite a buzz, and they show off some of the APIs in a good way.

The one demonstration that is truly inspiring, though, is the one featuring a visually impaired person. Talking computers inspired him to create an application that allows blind and visually impaired people to understand what is going on around them. The application is built upon Microsoft Cognitive Services, and it gives a good idea of how the APIs can be used to change the world for the better. Before moving on, head over to https://www.youtube.com/watch?v=R2mC-NUAmMk and take a peek into the world of Microsoft Cognitive Services.

Setting up boilerplate code

Before we start diving into the action, we will go through some setup. More to the point, we will set up some boilerplate code, which we will utilize throughout this book.

To get started, you will need to install a version of Visual Studio, preferably Visual Studio 2015 or higher. The Community Edition will work fine for this purpose. You do not need anything more than what the default installation offers.

Throughout this book, we will utilize the different APIs to build a smart house application. The application will explore how one might imagine a futuristic house to work. If you have seen the Iron Man movies, you can think of the application as resembling Jarvis, in some ways.

In addition, we will be doing smaller sample applications using the cognitive APIs. Doing so will allow us to cover each API, even those that did not make it to the final application.

What all the applications we will build have in common is that they will be Windows Presentation Foundation (WPF) applications. WPF is fairly well known, and allows us to build applications using the Model View ViewModel (MVVM) pattern. One of the advantages of taking this road is that we will be able to see the API usage quite clearly. It also separates the code, so that you can bring the API logic into other applications with ease.

The following steps describe the process of creating a new WPF project:

  1. Open Visual Studio and select File | New | Project.
  2. In the dialog, select the WPF Application option from Templates | Visual C# as shown in the following screenshot:

    [Screenshot: Setting up boilerplate code]

  3. Delete the MainWindow.xaml file, and create files and folders matching the following image:

    [Screenshot: Setting up boilerplate code]

We will not go through the MVVM pattern in detail, as this is beyond the scope of this book. The key takeaway from the image is that we have separated the View from what becomes the logic. We then rely on the ViewModel to connect the pieces.

Note

If you want to learn more about MVVM, I recommend reading an article from CodeProject at http://www.codeproject.com/Articles/100175/Model-View-ViewModel-MVVM-Explained.

To be able to run this, we do, however, need to cover some of the details in the project:

  1. Open the App.xaml file and make sure StartupUri is set to the correct View, like this (class name and namespace may vary based on the name of your application):
            <Application x:Class="Chapter1.App"
                xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" 
                xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" 
                xmlns:local="clr-namespace:Chapter1" 
                StartupUri="View/MainView.xaml"> 
    
  2. Open the MainViewModel.cs file and make it inherit from the ObservableObject class.
  3. Open the MainView.xaml file and add the MainViewModel file as datacontext to it, like this (namespace and class names may vary based on the name of your application):
            <Window x:Class="Chapter1.View.MainView" 
                xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" 
                xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" 
                xmlns:d="http://schemas.microsoft.com/expression/blend/2008" 
                xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006" 
                xmlns:local="clr-namespace:Chapter1.View" 
                xmlns:viewmodel="clr-namespace:Chapter1.ViewModel" 
                mc:Ignorable="d" 
                Title="Chapter 1" Height="300" Width="300"> 
                <Window.DataContext> 
                    <viewmodel:MainViewModel /> 
                </Window.DataContext> 
    

Following this, we need to fill in the content of the ObservableObject.cs file. We start off by having it implement the INotifyPropertyChanged interface:

        // Requires using System.ComponentModel; 
        public class ObservableObject : INotifyPropertyChanged 

This is a rather small class, which should contain the following:

        public event PropertyChangedEventHandler PropertyChanged; 
        protected void RaisePropertyChangedEvent(string propertyName) 
        { 
            PropertyChanged?.Invoke(this, new PropertyChangedEventArgs(propertyName)); 
        } 

We declare a property changed event, and create a function to raise the event. This will allow the User Interface (UI) to update its values when a given property has changed.

We also need to be able to execute actions when buttons are pressed. This can be achieved when we put some content into the DelegateCommand.cs file. Start by making the class implement the ICommand interface, and declare two variables:

        // Requires using System.Windows.Input; 
        public class DelegateCommand : ICommand 
        {
            private readonly Predicate<object> _canExecute; 
            private readonly Action<object> _execute; 

The two variables we have created will be set in the constructor. As you will notice, you are not required to provide the canExecute parameter, and you will see why in a bit:

            public DelegateCommand(Action<object> execute, Predicate<object> canExecute = null) 
            { 
                _execute = execute; 
                _canExecute = canExecute; 
            } 

To complete the class, we add two public functions and one public event:

            public bool CanExecute(object parameter) 
            { 
                if (_canExecute == null) return true; 
                return _canExecute(parameter); 
            } 
 
            public void Execute(object parameter) 
            { 
                _execute(parameter); 
            } 
    
            public event EventHandler CanExecuteChanged 
            { 
                add 
                { 
                    CommandManager.RequerySuggested += value; 
                } 
                remove 
                { 
                    CommandManager.RequerySuggested -= value; 
                } 
            } 
        } 

The functions declared will invoke the corresponding predicate or action declared in the constructor. These are what we will declare in our ViewModels, which in turn will either execute an action or tell the application whether it can execute an action. If a button is in a disabled state (the CanExecute function returns false) and the state of the CanExecute function changes, the event declared will let the button know.

With that in place, you should be able to compile and run the application, so go ahead and try it. You will notice that the application does not actually do anything or present any data yet, but we have an excellent starting point.

Before we do anything else with the code, we are going to export the project as a template. This is so we do not have to redo all these steps for each small sample project we create:

  1. Replace namespace names with substitute parameters (see the example after this list):
    1. In all the .cs files replace the namespace name with $safeprojectname$.
    2. In all the .xaml files replace the project name with $safeprojectname$ where applicable (typically class name and namespace declarations).
  2. Navigate to File | Export Template. This will open the Export Template wizard:

    [Screenshot: Setting up boilerplate code]

  3. Click the Project Template button. Select the project we just created and click on the Next button.
  4. Just leave the icon and preview image empty. Enter a recognizable name and description. Click on the Finish button:

    [Screenshot: Setting up boilerplate code]

  5. The template is now exported to a zip file and stored in the specified location.
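
For illustration, here is what the substitution in step 1 looks like for a hypothetical file in the ViewModel folder (the original namespace depends on what you named the project):

    // Before the substitution, in MainViewModel.cs: 
    namespace Chapter1.ViewModel 
 
    // After the substitution; Visual Studio inserts the actual 
    // project name whenever the template is used: 
    namespace $safeprojectname$.ViewModel 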

By default, the template will be imported into Visual Studio again. We are going to test that it works immediately by creating a project for this chapter. So go ahead and create a new project, selecting the template we just created. The template should be listed in the Visual C# section of the installed templates list. Call the project Chapter1 or something else if you prefer. Make sure it compiles and you are able to run it before we move to the next step.

Detecting faces with the Face API

With the newly created project we will now try our first API, the Face API. We will not be doing a whole lot, but we will see how simple it is to detect faces in images.

The steps we need to cover to do this are as follows:

  1. Register for a free Face API preview subscription.
  2. Add the necessary NuGet packages to our project.
  3. Add some UI to the application.
  4. Detect faces on command.

Head over to https://www.microsoft.com/cognitive-services/en-us/face-api to start the process of registering for a free subscription to the Face API. By clicking the yellow button labeled Get started for free, you will be taken to a login page. Log in with your Microsoft account, or if you do not have one, register for one.

Once logged in, you will need to verify that the Face API Preview has been selected in the list, and accept the terms and conditions. With that out of the way, you will be presented with the following:

[Screenshot: Detecting faces with the Face API]

You will need one of the two keys later when we are accessing the API.

This portal is where we will be accessing all our API keys throughout this book. You could register for more of them already, but we will do so for each new API as we add it.

Some of the APIs that we will cover have their own NuGet packages. Whenever this is the case, we will utilize those packages to perform the operations we want. Common to all the APIs is that they are REST APIs, which means that in practice you can use them from whichever language you want. For those APIs that do not have their own NuGet package, we call the APIs directly over HTTP.

For the Face API we are using now, a NuGet package does exist, so we need to add that to our project. Head over to the NuGet Package Manager option for the project we created earlier. In the Browse tab search for the Microsoft.ProjectOxford.Face package and install the package from Microsoft:

[Screenshot: Detecting faces with the Face API]

As you will notice, another package will also be installed. This is the Newtonsoft.Json package, which is required by the Face API.

The next step is to add some UI to our application. We will be adding this in the MainView.xaml file. Open this file, where the template code we created earlier should be. This means we have a DataContext, and can create bindings for our elements, which we will define now.

First we add a grid and define some rows for the grid:

    <Grid> 
        <Grid.RowDefinitions> 
            <RowDefinition Height="*" /> 
            <RowDefinition Height="20" /> 
            <RowDefinition Height="30" /> 
        </Grid.RowDefinitions> 

Three rows are defined. The first is a row where we will have an image. The second is a row for a status message, and the last is where we will place some buttons.

Next we add our image element:

        <Image x:Name="FaceImage" Stretch="Uniform" Source="{Binding ImageSource}" Grid.Row="0" /> 

We have given it a unique name. By setting the Stretch parameter to Uniform, we ensure that the image keeps its aspect ratio. Further on, we place this element in the first row. Last, we bind the image source to a BitmapImage in the ViewModel, which we will look at in a bit.

The next row will contain a text block with some status text. The Text property will be bound to a string property in the ViewModel:

        <TextBlock x:Name="StatusTextBlock" Text="{Binding StatusText}" Grid.Row="1" /> 

The last row will contain one button to browse for an image and one button to be able to detect faces. The command properties of both buttons will be bound to the DelegateCommand properties in the ViewModel:

        <Button x:Name="BrowseButton" Content="Browse" Height="20" Width="140" HorizontalAlignment="Left" Command="{Binding BrowseButtonCommand}" Margin="5, 0, 0, 5"Grid.Row="2" /> 
 
        <Button x:Name="DetectFaceButton" Content="Detect face" Height="20" Width="140" HorizontalAlignment="Right" Command="{Binding DetectFaceCommand}" Margin="0, 0, 5, 5"Grid.Row="2"/> 

With the View in place, make sure the code compiles and run it. This should present you with the following UI:

[Screenshot: Detecting faces with the Face API]

The last part is to create the binding properties in our ViewModel, and make the buttons execute something. Open the MainViewModel.cs file. The class should already inherit from the ObservableObject class. First we define two variables:

    private string _filePath; 
    private IFaceServiceClient _faceServiceClient; 

The string variable will hold the path to our image, while the IFaceServiceClient variable is used to access the Face API. Next, we define two properties:

    private BitmapImage _imageSource; 
    public BitmapImage ImageSource 
    { 
        get { return _imageSource; } 
        set 
        { 
            _imageSource = value; 
            RaisePropertyChangedEvent("ImageSource"); 
        } 
    } 
 
    private string _statusText; 
    public string StatusText 
    { 
        get { return _statusText; } 
        set 
        { 
            _statusText = value; 
            RaisePropertyChangedEvent("StatusText"); 
        } 
    } 

What we have here is a property for the BitmapImage, mapped to the Image element in the View. We also have a string property for the status text, mapped to the TextBlock element in the View. As you may also notice, when either of the properties is set, we call the RaisePropertyChangedEvent method. This ensures that the UI updates when either property has a new value.

Next we define our two DelegateCommand objects, and do some initialization through the constructor:

    public ICommand BrowseButtonCommand { get; private set; } 
    public ICommand DetectFaceCommand { get; private set; } 
 
    public MainViewModel() 
    { 
        StatusText = "Status: Waiting for image..."; 
 
        _faceServiceClient = new FaceServiceClient("YOUR_API_KEY_HERE"); 
 
        BrowseButtonCommand = new DelegateCommand(Browse); 
        DetectFaceCommand = new DelegateCommand(DetectFace, CanDetectFace); 
    } 

The properties for the commands are both public to get, but private to set. This means we can only set them from within the ViewModel. In our constructor, we start off by setting the status text. Next, we create a Face API client object, which must be created with the API key we obtained earlier.

Finally, we create the DelegateCommand objects for our command properties. Notice how the browse command does not specify a predicate. This means it will always be possible to click the corresponding button. To make this compile, we need to create the functions specified in the DelegateCommand constructors: the Browse, DetectFace, and CanDetectFace functions:

    private void Browse(object obj) 
    { 
        var openDialog = new Microsoft.Win32.OpenFileDialog(); 
 
        openDialog.Filter = "JPEG Image(*.jpg)|*.jpg"; 
        bool? result = openDialog.ShowDialog(); 
 
        if (result != true) return; // covers both cancellation (false) and null 

We start the Browse function by creating an OpenFileDialog object. This dialog is assigned a filter for JPEG images, and in turn it is opened. When the dialog is closed we check the result. If the dialog was cancelled, we simply stop further execution:

        _filePath = openDialog.FileName; 
        Uri fileUri = new Uri(_filePath); 

With the dialog closed, we grab the filename of the file selected, and create a new URI from it:

        BitmapImage image = new BitmapImage(); 
 
        image.BeginInit(); 
        image.CacheOption = BitmapCacheOption.None; 
        image.UriSource = fileUri; 
        image.EndInit(); 

With the newly created URI, we create a new BitmapImage inside a BeginInit/EndInit block (setting properties on an already initialized image would throw an exception). We specify it to use no cache, and we set the URI source to the URI we created:

        ImageSource = image; 
        StatusText = "Status: Image loaded..."; 
    } 

The last step we take is to assign the bitmap image to our BitmapImage property, so the image is shown in the UI. We also update the status text to let the user know the image has been loaded.

Before we move on, it is time to make sure the code compiles, and that you are able to load an image into the View:

    private bool CanDetectFace(object obj) 
    { 
        return !string.IsNullOrEmpty(ImageSource?.UriSource.ToString()); 
    } 

The CanDetectFace function checks whether the DetectFaceButton button should be enabled. In this case, it checks whether our image property actually has a URI. If it does, by extension that means we have an image, and we should be able to detect faces:

    private async void DetectFace(object obj) 
    { 
        FaceRectangle[] faceRects = await UploadAndDetectFacesAsync(); 
 
        string textToSpeak = "No faces detected"; 
 
        if (faceRects.Length == 1) 
            textToSpeak = "1 face detected"; 
        else if (faceRects.Length > 1) 
            textToSpeak = $"{faceRects.Length} faces detected"; 
 
        Debug.WriteLine(textToSpeak); 
    } 

Our DetectFace method calls an async method to upload the image and detect faces. The return value is an array of FaceRectangle objects, containing the rectangular area of each face in the given image. We will look into the function we call in a bit.

After the call has finished executing we print a line with the number of faces to the debug console window:

    private async Task<FaceRectangle[]> UploadAndDetectFacesAsync() 
    { 
        StatusText = "Status: Detecting faces..."; 
 
        try 
        { 
            using (Stream imageFileStream = File.OpenRead(_filePath)) 

In the UploadAndDetectFacesAsync function we create a Stream from the image. This stream will be used as input for the actual call to the Face API service:

            Face[] faces = await _faceServiceClient.DetectAsync(imageFileStream, true, true, new List<FaceAttributeType>() { FaceAttributeType.Age }); 

This line is the actual call to the detection endpoint for the Face API. The first parameter is the file stream we created in the previous step. The rest of the parameters are all optional. The second parameter should be true if you want to get a face ID. The next parameter specifies if you want to receive face landmarks or not. The last parameter takes a list of facial attributes you may want to receive. In our case, we want the age parameter to be returned, so we need to specify that.

The return type of this function call is an array of faces, with all the parameters you have specified:

                List<double> ages = faces.Select(face => face.FaceAttributes.Age).ToList(); 
                FaceRectangle[] faceRects = faces.Select(face => face.FaceRectangle).ToArray(); 
 
                StatusText = "Status: Finished detecting faces..."; 
 
                foreach (var age in ages) 
                { 
                    Console.WriteLine(age); 
                } 
                return faceRects; 
            } 
        } 

The first line iterates over all the faces and retrieves the approximate age of each one. These ages are later printed to the debug console window, in the foreach loop that follows.

The second line iterates over all the faces and retrieves the face rectangle, with the rectangular location of each face. This is the data we return to the calling function.

Finish the method with a catch clause. If an exception is thrown by our API call, we catch it, show the error message, and return an empty FaceRectangle array.
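
A minimal catch clause along those lines might look like this (the exact status message is up to you):

        catch (Exception ex) 
        { 
            StatusText = $"Status: Failed to detect faces - {ex.Message}"; 
            return new FaceRectangle[0]; 
        } 
    } 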

With that code in place, you should now be able to run the full example. The end result will look like the following screenshot:

[Screenshot: Detecting faces with the Face API]

The resulting debug console window will print the following text:

    1 face detected 
    23,7 

Overview of what we are dealing with

Now that you have seen a basic example of how to detect faces, it is time to learn a bit about what else Cognitive Services can do for you. When using Cognitive Services, you have 21 different APIs at hand. These are in turn separated into five top-level domains according to what they do: vision, speech, language, knowledge, and search. Let's look at each of them in the following sections.

Vision

APIs under the Vision flag allow your apps to understand images and video content. They allow you to retrieve information about faces, feelings, and other visual content. You can stabilize videos and recognize celebrities. You can read text in images and generate thumbnails from videos and images.

There are four APIs contained in the Vision area, which we will look at now.

Computer Vision

Using the Computer Vision API, you can retrieve actionable information from images. This means you can identify content (such as image format, image size, colors, faces, and more). You can detect whether or not an image is adult/racy. This API can recognize text in images and extract it to machine-readable words. It can detect celebrities from a variety of areas. Lastly, it can generate storage-efficient thumbnails with smart cropping functionality.

We will look into Computer Vision in Chapter 2, Analyzing Images to Recognize a Face.

Emotion

The Emotion API allows you to recognize emotions, both in images and videos. This can allow for more personalized experiences in applications. Emotions detected are cross-cultural emotions: anger, contempt, disgust, fear, happiness, neutral, sadness, and surprise.

We will cover Emotion API over two chapters, Chapter 2, Analyzing Images to Recognize a Face, for image-based emotions, and Chapter 3, Analyzing Videos, for video-based emotions.

Face

We have already seen a very basic example of what the Face API can do. The rest of the API revolves around the same task: detecting, identifying, organizing, and tagging faces in photos. Apart from face detection, you can see how likely it is that two faces belong to the same person. You can identify faces and also find similar-looking faces.

We will dive further into Face API in Chapter 2, Analyzing Images to Recognize a Face.

Video

The Video API is about analyzing, editing, and processing videos in your app. If you have a video that is shaky, the API allows you to stabilize it. You can detect and track faces in videos. If a video contains a stationary background, you can detect motion. The API lets you generate thumbnail summaries for videos, which allows users to see previews or snapshots quickly.

Video will be covered throughout Chapter 3, Analyzing Videos.

Speech

Adding one of the Speech APIs allows your application to hear and speak to your users. The APIs can filter noise and identify speakers. They can drive further actions in your application, based on the recognized intent.

Speech contains three APIs, discussed as follows.

Bing Speech

Adding the Bing Speech API to your application allows you to convert speech to text and vice versa. You can convert spoken audio to text, either by utilizing a microphone or other sources in real-time, or by converting audio from files. The API also offers speech intent recognition, which is trained by Language Understanding Intelligent Service (LUIS) to understand the intent.

Speaker Recognition

The Speaker Recognition API gives your application the ability to know who is talking. By using this API, you can verify that the person speaking is who they claim to be. You can also determine who an unknown speaker is, based on a group of selected speakers.

Custom Recognition

To improve speech recognition, you can use the Custom Recognition API. This allows you to fine-tune speech recognition operations for anyone, anywhere. By using this API, the speech recognition model can be tailored to the vocabulary and speaking style of the user. In addition, the model can be customized to match the expected environment of the application.

We will cover all Speech related APIs in Chapter 5, Speak with Your Application.

Language

APIs related to language allow your application to process natural language and learn how to recognize what users want. You can add textual and linguistic analysis to your application, as well as natural language understanding.

The following five APIs can be found in the Language area.

Bing Spell Check

The Bing Spell Check API allows you to add advanced spell checking to your application.

This API will be covered in Chapter 6, Understanding Text.

Language Understanding Intelligent Service (LUIS)

LUIS is an API that can help your application understand commands from your users. Using this API, you can create language models that understand intents. By using models from Bing and Cortana, you can make these models recognize common requests and entities (such as places, times, and numbers). You can add conversational intelligence to your applications.

LUIS will be covered in Chapter 4, Let Applications Understand Commands.

Linguistic Analysis

The Linguistic Analysis API lets you parse complex text to explore its structure. By using this API you can find nouns, verbs, and more in text, which allows your application to understand who is doing what to whom.

We will see more of Linguistic Analysis in Chapter 6, Understanding Text.

Text Analysis

The Text Analysis API will help you extract information from text. You can find the sentiment of a text (whether it is positive or negative). You will be able to detect language, topic, and key phrases used throughout the text.

We will also cover Text Analysis in Chapter 6, Understanding Text.

Web Language Model

By using the Web Language Model (WebLM) API you are able to leverage the power of language models trained on web-scale data. You can use this API to predict which words or sequences follow a given sequence or word.

Web Language Model API will be covered in Chapter 6, Understanding Text.

Knowledge

When talking about Knowledge APIs, we are talking about APIs that allow you to tap into rich knowledge. This may be knowledge from the web. It may be from academia or it may be your own data. Using these APIs, you will be able to explore different nuances of knowledge.

The following four APIs are contained in the Knowledge API area.

Academic

Using the Academic API, you can explore relationships among academic papers, journals, and authors. This API allows you to interpret natural language user query strings, which lets your application anticipate what the user is typing. It will evaluate the resulting expression and return academic knowledge entities.

This API will be covered more in Chapter 8, Query Structured Data in a Natural Way.

Entity Linking

Entity Linking is the API you would use to extend knowledge of people, places, and events based on the context. As you may know, a single word may be used differently based on the context. Using this API allows you to recognize and identify each separate entity within a paragraph, based on the context.

We will go through Entity Linking API in Chapter 7, Extending Knowledge Based on Context.

Knowledge Exploration

The Knowledge Exploration API will let you add interactive search over structured data to your projects. It interprets natural language queries and offers auto-completions to minimize user effort. Based on the query expression received, it will retrieve detailed information about matching objects.

Details on this API will be covered in Chapter 8, Query Structured Data in a Natural Way.

Recommendations

The Recommendations API allows you to provide personalized product recommendations for your customers. You can use this API to add frequently bought together functionality to your application. Another feature you can add is item-to-item recommendations, which allow customers to see what other customers like. This API will also allow you to add recommendations based on the prior activity of the customer.

We will go through this API in Chapter 7, Extending Knowledge Based on Context.

Search

Search APIs give you the ability to make your applications more intelligent with the power of Bing. Using these APIs, you can use a single call to access data from billions of web pages, images, videos, and news.

The following five APIs are in the search domain.

Bing Web Search

With Bing Web Search you can search for details in billions of web documents indexed by Bing. All the results can be arranged and ordered according to a layout you specify, and the results are customized to the location of the end user.

Bing Image Search

Using the Bing Image Search API, you can add an advanced image and metadata search to your application. Results include URLs to images, thumbnails, and metadata. You will also be able to get machine-generated captions, similar images, and more. This API allows you to filter the results based on image type, layout, freshness (how new the image is), and license.

Bing Video Search

Bing Video Search will allow you to search for videos and return rich results. The results contain metadata from the videos, static or motion-based thumbnails, and the video itself. You can add filters to the results based on freshness, video length, resolution, and price.

Bing News Search

If you add Bing News Search to your application, you can search for news articles. Results can include authoritative images, related news and categories, information on the provider, URLs, and more. More specifically, you can filter news by topic.

Bing Autosuggest

The Bing Autosuggest API is a small but powerful one. It will allow your users to search faster with search suggestions, allowing you to connect a powerful search to your apps.

All Search APIs will be covered in Chapter 9, Adding Specialized Search.

Getting feedback on detected faces

Now that we have seen what else Microsoft Cognitive Services can offer, we are going to add another API to our face detection application. In this section, we will add the Bing Speech API to make the application say the number of faces out loud.

This feature of the API is not provided in the NuGet package, and as such we are going to use the REST API.

To reach our end goal we are going to add two new classes, TextToSpeak and Authentication. The first class will be in charge of generating correct headers and making the calls to our service endpoint. The latter class will be in charge of generating an authentication token. This will be tied together in our ViewModel, where we will make the application speak back to us.

We need to get our hands on an API key first. Head over to https://www.microsoft.com/cognitive-services/en-us/speech-api and click the yellow button stating Get started for free. Make sure the correct API (Bing Speech Free/Preview) is selected and accept the terms and conditions.

To be able to call the Bing Speech API, we need to have an authorization token. Go back to Visual Studio and create a new file called Authentication.cs. Place this in the Model folder.

We need to add two new references to the project. Find the System.Runtime.Serialization and System.Web assemblies in the Assemblies tab of the Add References window and add them.

In our newly created Authentication file, add a public class beneath the automatically generated class:

    [DataContract] 
    public class AccessTokenInfo 
    { 
        [DataMember] 
        public string access_token { get; set; } 
        [DataMember] 
        public string token_type { get; set; } 
        [DataMember] 
        public string expires_in { get; set; } 
        [DataMember] 
        public string scope { get; set; } 
    } 

The response for our access token request will be serialized into this class, which will be used by our text-to-speech conversion later.
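
For reference, the token endpoint responds with JSON along these lines (placeholder values; the field names are what the data contract above binds to):

    { 
        "access_token": "<token string>", 
        "token_type": "<type>", 
        "expires_in": "<lifetime in seconds>", 
        "scope": "https://speech.platform.bing.com" 
    } 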

In our Authentication class, define the following private members and one public property:

    private string _requestDetails; 
    private AccessTokenInfo _token; 
    private Timer _tokenRenewer; 
 
    private const int TokenRefreshInterval = 9; 
 
    public AccessTokenInfo Token { get { return _token; } } 

The constructor should accept two string parameters: clientId and clientSecret. The clientId parameter will typically be your application name, while the clientSecret parameter is the API key you signed up for.
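
Expressed as a signature, that amounts to the following sketch:

    public Authentication(string clientId, string clientSecret) 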

In the constructor, assign the _requestDetails, _token, and _tokenRenewer variables:

    _requestDetails = string.Format("grant_type=client_credentials&client_id={0}&client_secret={1}&scope={2}", 
        HttpUtility.UrlEncode(clientId), 
        HttpUtility.UrlEncode(clientSecret), 
        HttpUtility.UrlEncode("https://speech.platform.bing.com")); 
 
    _token = GetToken(); 
 
    _tokenRenewer = new Timer(new TimerCallback(OnTokenExpiredCallback), this, 
        TimeSpan.FromMinutes(TokenRefreshInterval), 
        TimeSpan.FromMilliseconds(-1)); 

The _requestDetails variable contains the credentials provided through the parameters. It also defines the scope for which these are valid.

We then fetch the access token, in a method we will create shortly.

Finally, we create our timer, which will invoke the callback function after 9 minutes. The callback function needs to fetch the access token again and assign it to the _token variable. It also needs to ensure that the timer fires again in another 9 minutes.
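
A minimal sketch of that callback, using the members declared above (error handling omitted for brevity):

    private void OnTokenExpiredCallback(object stateInfo) 
    { 
        // Fetch a fresh token and store it for subsequent requests. 
        _token = GetToken(); 
 
        // Schedule the next renewal in another 9 minutes. 
        _tokenRenewer.Change(TimeSpan.FromMinutes(TokenRefreshInterval), 
            TimeSpan.FromMilliseconds(-1)); 
    } 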

Next we need to create the GetToken method. This method should return an AccessTokenInfo object, and it should be declared as private:

    WebRequest webRequest = WebRequest.Create("https://oxford-speech.cloudapp.net/token/issueToken"); 
    webRequest.ContentType = "application/x-www-form-urlencoded"; 
    webRequest.Method = "POST"; 

In the method, we start by creating a web request object, pointing to an endpoint that will generate our token. We specify the content type and HTTP method:

    byte[] bytes = Encoding.ASCII.GetBytes(_requestDetails); 
    webRequest.ContentLength = bytes.Length; 

We then go on to get the byte array from the _requestDetails variable that we initialized in the constructor. This will be sent with the web request:

    try 
    { 
        using (Stream outputStream = webRequest.GetRequestStream()) 
        { 
            outputStream.Write(bytes, 0, bytes.Length); 
        } 

When the request has been sent, we expect there to be a response. We want to read this response and serialize it into the AccessTokenInfo object, which we created earlier:

        using (WebResponse webResponse = webRequest.GetResponse()) 
        { 
            DataContractJsonSerializer serializer = new DataContractJsonSerializer(typeof(AccessTokenInfo)); 
            AccessTokenInfo token = (AccessTokenInfo)serializer.ReadObject(webResponse.GetResponseStream()); 
            return token; 
        } 

Add a catch clause to handle potential errors, and the Authentication class is ready to be used.
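
One way to write it, first closing the try block, and assuming a failed request should simply yield a null token (the calling code checks for this, as we will see later):

    } 
    catch (Exception ex) 
    { 
        // Requires using System.Diagnostics; log the failure and leave it 
        // to the caller to handle the missing token. 
        Debug.WriteLine($"Failed to retrieve access token: {ex.Message}"); 
        return null; 
    } 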

Add a new file, called TextToSpeak.cs, if you have not already done so. Put this file in the Model folder.

Beneath the newly created class (but inside the namespace), we want to add two event argument classes. These will be used to handle audio events, which we will see later:

    public class AudioEventArgs : EventArgs 
    { 
        public AudioEventArgs(Stream eventData) 
        { 
            EventData = eventData; 
        } 
 
        public Stream EventData { get; private set; }  
    } 

The AudioEventArgs class simply takes a generic stream, and you can imagine it being used to send the audio stream to our application:

    public class AudioErrorEventArgs : EventArgs 
    { 
        public AudioErrorEventArgs(string message) 
        { 
            ErrorMessage = message; 
        } 
 
        public string ErrorMessage { get; private set; } 
    } 

This next class allows us to send an event with a specific error message.

We move on to start on the TextToSpeak class, where we start off by declaring some events and class members:

    public class TextToSpeak 
    { 
        public event EventHandler<AudioEventArgs> OnAudioAvailable; 
        public event EventHandler<AudioErrorEventArgs> OnError; 
 
        private string _gender; 
        private string _voiceName; 
        private string _outputFormat; 
        private string _authorizationToken; 
        private AccessTokenInfo _token;  
 
        private List<KeyValuePair<string, string>> _headers = new List<KeyValuePair<string, string>>(); 

The first two lines in the class are events, using the event argument classes we created earlier. These events will be triggered if a call to the API finishes and returns some audio, or if anything fails. The next few lines are string variables, which we will use as input parameters, and one member to hold our access token information. The last line creates a new list, which we will use to hold our request headers.

We add two constant strings to our class:

        private const string RequestUri = "https://speech.platform.bing.com/synthesize"; 
        private const string SsmlTemplate = "<speak version='1.0' xml:lang='en-US'><voice xml:lang='en-US' xml:gender='{0}' name='{1}'>{2}</voice></speak>";

The first string contains the request URI. That is the REST API endpoint we need to call to execute our request. Next, we have a string defining our Speech Synthesis Markup Language (SSML) template. This is where we will specify what the Speech service should say, and a bit about how it should say it.
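
To make the template concrete: with the values we are about to set in the constructor filled in for {0} and {1}, and "1 face detected" as the text for {2}, the formatted SSML would look like this (a single string, wrapped here for readability):

    <speak version='1.0' xml:lang='en-US'> 
        <voice xml:lang='en-US' xml:gender='Female' 
               name='Microsoft Server Speech Text to Speech Voice (en-US, ZiraRUS)'> 
            1 face detected 
        </voice> 
    </speak> 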

Next we create our constructor:

        public TextToSpeak() 
        { 
            _gender = "Female"; 
            _outputFormat = "riff-16khz-16bit-mono-pcm"; 
            _voiceName = "Microsoft Server Speech Text to Speech Voice (en-US, ZiraRUS)"; 
        } 

Here we are just initializing some of the variables we declared earlier. As you can see, we define the voice to be female, and we select a specific voice name. The gender can naturally be either female or male, while the voice name can be one of a long list of options. We will look at the details of that list when we go through this API in a later chapter.

The last line specifies the output format of the audio. This defines the format and codec used by the resulting audio stream. Again, this can be any of a number of varieties, which we will look into in a later chapter.

Following the constructor, there are three public methods we will create. These will generate an authentication token, generate some HTTP headers, and finally execute our call to the API. Before we create these, you should add two helper methods to raise our events. Call them RaiseOnAudioAvailable and RaiseOnError; they should accept AudioEventArgs and AudioErrorEventArgs as parameters, respectively.
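
A sketch of those two helpers, assuming they simply forward to the events declared at the top of the class:

        private void RaiseOnAudioAvailable(AudioEventArgs args) 
        { 
            OnAudioAvailable?.Invoke(this, args); 
        } 
 
        private void RaiseOnError(AudioErrorEventArgs args) 
        { 
            OnError?.Invoke(this, args); 
        } 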

Next, add a new method called GenerateHeaders:

        public void GenerateHeaders() 
        { 
            _headers.Add(new KeyValuePair<string, string>("Content-Type", "application/ssml+xml")); 
            _headers.Add(new KeyValuePair<string, string>("X-Microsoft-OutputFormat", _outputFormat)); 
            _headers.Add(new KeyValuePair<string, string>("Authorization", _authorizationToken)); 
            _headers.Add(new KeyValuePair<string, string>("X-Search-AppId", Guid.NewGuid().ToString("N"))); 
            _headers.Add(new KeyValuePair<string, string>("X-Search-ClientID", Guid.NewGuid().ToString("N"))); 
            _headers.Add(new KeyValuePair<string, string>("User-Agent", "Chapter1")); 
        } 

Here we add the HTTP headers to our previously created list. These headers are required for the service to respond; if any are missing, it will yield an HTTP 400 response. What we are using as headers is something we will cover in more detail later. For now, just make sure they are present.

Following this we want to add a new method called GenerateAuthenticationToken:

        public bool GenerateAuthenticationToken(string clientId, string clientSecret) 
        { 
            Authentication auth = new Authentication(clientId, clientSecret); 

This method accepts two string parameters: an ID for the client (typically your application name) and the client secret (your API key). First, we create a new object of the Authentication class, which we looked at earlier:

        try 
        { 
            _token = auth.Token; 
 
            if (_token != null) 
            { 
                _authorizationToken = $"Bearer {_token.access_token}"; 
 
                return true; 
            } 
            else 
            { 
                RaiseOnError(new AudioErrorEventArgs("Failed to generate authentication token.")); 
                return false; 
            } 
        } 

We use the authentication object to retrieve an access token. This token is used in our authorization token string, which, as we saw earlier, is being passed on in our headers. If the application for some reason fails to generate the access token, we trigger an error event.

Finish this method by adding the associated catch clause. If any exceptions occur, we want to raise a new error event.
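
A catch clause in the same spirit as the error handling above might look like this:

        catch (Exception ex) 
        { 
            RaiseOnError(new AudioErrorEventArgs(ex.GetBaseException().Message)); 
            return false; 
        } 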

The last method we need to create in this class we are going to call the SpeakAsync method. This will be the method that actually performs the request to the Speech API:

        public Task SpeakAsync(string textToSpeak, CancellationToken cancellationToken) 
        { 
            var cookieContainer = new CookieContainer(); 
            var handler = new HttpClientHandler() { CookieContainer = cookieContainer }; 
            var client = new HttpClient(handler);  

The method takes two parameters: a string, which is the text we want spoken, and a cancellation token, which can be used to propagate that the given operation should be cancelled.

When entering the method, we create three objects, which we will use to execute the request. These are classes from the .NET library, and we will not be going through them in any more detail:

            foreach (var header in _headers) 
            { 
                client.DefaultRequestHeaders.TryAddWithoutValidation(header.Key, header.Value); 
            } 

We generated some headers earlier, and we need to add these to our HTTP client. We do this in the preceding foreach loop, which runs through the entire list:

            var request = new HttpRequestMessage(HttpMethod.Post, RequestUri) 
            { 
                Content = new StringContent(string.Format(SsmlTemplate, _gender, _voiceName, textToSpeak)) 
            }; 

Next we create an HTTP Request Message, specifying that we will send data through the POST method, and specifying the request URI. We also specify the content, using the SSML template we created earlier and adding the correct parameters (gender, voice name, and the text we want to be spoken):

            var httpTask = client.SendAsync(request, HttpCompletionOption.ResponseHeadersRead, cancellationToken); 

We use the HTTP client to send the HTTP request asynchronously:

            var saveTask = httpTask.ContinueWith(async (responseMessage, token) => 
            { 
                try 
                { 
                    if (responseMessage.IsCompleted && responseMessage.Result != null && responseMessage.Result.IsSuccessStatusCode) 
                    { 
                        var httpStream = await responseMessage.Result.Content.ReadAsStreamAsync().ConfigureAwait(false); 
                        RaiseOnAudioAvailable(new AudioEventArgs(httpStream)); 
                    } 
                    else 
                    { 
                        RaiseOnError(new AudioErrorEventArgs($"Service returned {responseMessage.Result.StatusCode}")); 
                    } 
                } 
                catch (Exception e) 
                { 
                    RaiseOnError(new AudioErrorEventArgs(e.GetBaseException().Message)); 
                } 

The preceding code is a continuation of the asynchronous send call we made previously. The continuation also runs asynchronously, and it checks the status of the response. If the response is successful, it reads the response message as a stream and triggers the audio event. If everything succeeded, that stream should contain our text in spoken words.

If the response indicates anything other than success, we raise the error event.

The catch clause shown takes care of raising an error if an exception is thrown. We also want to add a finally clause, in which we dispose of all the objects we have used.
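
A sketch of that finally clause, placed after the catch block inside the continuation (exactly which objects to dispose of is a judgment call; here we clean up the HTTP objects created at the start of SpeakAsync):

                finally 
                { 
                    // Clean up the objects used to perform the request. 
                    request.Dispose(); 
                    client.Dispose(); 
                    handler.Dispose(); 
                } 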

The final code we need specifies that the continuation task is attached to the parent task, and adds the cancellation token to the task. Go on and add the following code to finish off the method:

            }, TaskContinuationOptions.AttachedToParent, cancellationToken); 
 
            return saveTask; 
        } 

With that in place we are now able to utilize this class in our application, and we are going to do that now. Open the MainViewModel.cs file and declare a new class variable:

        private TextToSpeak _textToSpeak; 

Add the following code in the constructor, to initialize the newly added object:

            _textToSpeak = new TextToSpeak(); 
            _textToSpeak.OnAudioAvailable += _textToSpeak_OnAudioAvailable; 
            _textToSpeak.OnError += _textToSpeak_OnError; 
 
            if (_textToSpeak.GenerateAuthenticationToken("Chapter1", "API_KEY_HERE")) 
                _textToSpeak.GenerateHeaders(); 

After we have created the object, we hook up the two events to event handlers. Then we generate an authentication token, specifying the application name and the API key for the Bing Speech API. If that call succeeds, we generate the HTTP headers required.

We need to add the event handlers, so create the method called _textToSpeak_OnError first:

        private void _textToSpeak_OnError(object sender, AudioErrorEventArgs e) 
        { 
            StatusText = $"Status: Audio service failed - {e.ErrorMessage}"; 
        } 

It should be rather simple: we just output the error message to the user in the status text field.

Next, we need to create a _textToSpeak_OnAudioAvailable method:

        private void _textToSpeak_OnAudioAvailable(object sender, AudioEventArgs e) 
        { 
            SoundPlayer player = new SoundPlayer(e.EventData); 
            player.Play(); 
            e.EventData.Dispose(); 
        } 

Here we utilize the SoundPlayer class from the .NET framework. This allows us to add the stream data directly and simply play the message.

The last piece we need for everything to work is a call to the SpeakAsync method. We make it by adding the following at the end of our DetectFace method:

    await _textToSpeak.SpeakAsync(textToSpeak, CancellationToken.None); 

With that in place you should now be able to compile and run the application. By loading a photo and clicking Detect face, you should be able to get the number of faces spoken back to you. Just remember to have audio on!

Summary

Throughout this chapter we got a brief introduction to Microsoft Cognitive Services. We started off by creating a template project, to easily create new projects for the coming chapters. We tried this template by creating an example project for this chapter. Then we learned how to detect faces in images, utilizing the Face API. From there we took a quick tour of what Cognitive Services has to offer. We finished off by adding text-to-speech capabilities to our application, by using the Bing Speech API.

The next chapter will go into more details of the Vision part of the APIs. There we will learn how to analyze images using the Computer Vision API. We will dive more into the Face API, and we will learn how to detect emotions in faces, using the Emotion API. Some of this will be used to start building our smart house application.
