We recently had a higher education prospect ask us whether we could use PowerApps to facially recognize students and retrieve their information as they enter an exam room. Being aware of the new Cognitive Services in Azure, and having some experience creating PowerApps, I responded: well yes, of course we can. Later that week I decided to put my response to the test and attempt to create a PowerApp which would capture an anonymous image using the camera control, use FaceAPI from Azure Cognitive Services to validate the image against all images stored against my Dynamics 365 contacts, and finally respond with the contact's first name, last name and student number if a match was found.

This post will show what D365 PowerApps are capable of when combined with Azure technologies and Dynamics 365.

What is Azure Cognitive Services?

Before we start, some background: Microsoft released a set of APIs in Azure which implement machine learning algorithms, enabling natural and contextual interaction with your applications. These include:

  • Language
  • Speech
  • Vision
  • Search
  • Knowledge

We are going to focus on Vision, and more specifically FaceAPI. FaceAPI applies image processing algorithms through several APIs which provide face verification, face search, face grouping and face identification.

We want to utilize the functionality FaceAPI offers to iterate a set of stored images and validate a given anonymous image against them, in this case, the image captured by the PowerApp against the CRM contact images.

At the time of building this demo, FaceAPI was in preview and therefore had a few limitations and issues in terms of how frequently you could call the API.

There are several ways to achieve facial recognition using the FaceAPI methods but I have chosen to keep it as simple as possible by using the Verify method and simply providing the method with the images to verify against. Alternatively, one could build a Person Group and individually add each Dynamics 365 contact to the group as a Person with an associated list of faces.

More information and API documentation on FaceAPI can be found at: https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236

Solution Overview

To deliver this solution we need five components to work in harmony:

Azure Storage Account

We want to use an Azure Storage Account as a blob container for the Dynamics 365 contact images as well as the anonymous images submitted by the PowerApp. The container stores the images and provides an absolute URL to each one, which is needed by the FaceAPI methods.
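For illustration, the absolute URL to a contact image in the entityimages container takes this general form (the storage account name below is a placeholder, and the filename is the contact's GUID as described later):

```
https://<storageaccount>.blob.core.windows.net/entityimages/<contactid>.jpg
```

It is this URL that will be handed to the FaceAPI detection methods.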

Azure App Service

A WebAPI service will be responsible for uploading the anonymous image to the Storage Blob, using FaceAPI to verify the images, and querying Dynamics 365 contact data.


Azure Cognitive Services

We will use the FaceAPI methods to detect the faces in our images as well as verify an anonymous image against a list of stored images.

D365 PowerApps

We will build a simple PowerApp to act as our user interface in capturing an image and surfacing Dynamics 365 data.

Dynamics 365

Dynamics 365 acts as our data store for contact records.

Azure Storage Account

We want to use an Azure Storage Account to serve as the container for the Dynamics 365 contact images and our anonymous images uploaded when captured in the PowerApp.

I created two containers within the blob service named:

  • entityimages
  • powerappimages


Entityimages will contain all the images stored in Dynamics 365 on the contact records. I have chosen to populate the blob via a simple console application. Of course, in the real world you may want to consider instead using a plugin on the contact to maintain these images in the blob, or possibly an Azure Function App which can be scheduled to run at a specific interval.

I am naming the image files after the contact ID and adding some metadata to the images when uploading them to the blob.

Powerappimages will contain the anonymous images captured by the PowerApp camera control. Initially my intention was to use Microsoft Flow to simply upload the captured image to the Azure Blob; however, after much tinkering I found it was not possible out of the box. Because of this limitation with Flow, I have opted to create and use a WebApi web service which will be responsible for uploading the image to the Azure Blob and executing the FaceAPI logic. More on that below.

Console Application Code

using Microsoft.Xrm.Sdk;
using Microsoft.Xrm.Sdk.Query;
using Microsoft.Xrm.Tooling.Connector;
using System;
using System.Configuration;
using System.IO;
using System.ServiceModel;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

namespace CRMAzureBlob
{
    class Program
    {
        private static CrmServiceClient _client;

        public static void Main(string[] args)
        {
            try
            {
                //Create D365 and Storage connections
                _client = new CrmServiceClient(ConfigurationManager.ConnectionStrings["CRMConnectionString"].ConnectionString);
                CloudStorageAccount storageAccount = CloudStorageAccount.Parse(ConfigurationManager.AppSettings["StorageConnectionString"]);
                CloudBlobClient blobClient = null;
                CloudBlobContainer container = null;

                //Fetch contacts with entity images
                var fetchXml = @"<fetch mapping='logical' >
                                  <entity name='contact' >
                                    <attribute name='fullname' />
                                    <attribute name='entityimage' />
                                    <filter>
                                      <condition attribute='entityimage_url' operator='not-null' />
                                    </filter>
                                  </entity>
                                </fetch>";

                //Execute fetch
                var results = _client.RetrieveMultiple(new FetchExpression(fetchXml));
                Console.WriteLine($"Uploading {results.Entities.Count} Images from D365..");

                //Iterate the contacts and add them to the blob
                foreach (var c in results.Entities)
                {
                    var contactId = c["contactid"];
                    var image = c["entityimage"];
                    var fullName = c["fullname"];

                    if (blobClient == null)
                    {
                        blobClient = storageAccount.CreateCloudBlobClient();
                        container = blobClient.GetContainerReference("entityimages");
                    }

                    //Name the blob after the contact id and tag it with metadata
                    var blockBlob = container.GetBlockBlobReference($"{contactId}.jpg");
                    blockBlob.Metadata.Add("ContactId", contactId.ToString());
                    blockBlob.Metadata.Add("FullName", (string)fullName);

                    using (var stream = new MemoryStream((byte[])image))
                    {
                        blockBlob.UploadFromStream(stream);
                    }
                }
                Console.WriteLine("Upload Complete");
            }
            catch (FaultException<OrganizationServiceFault> ex)
            {
                string message = ex.Message;
            }
        }

        public static void ClearBlob(CloudBlobContainer container)
        {
            //Loop over items within the container and delete each blob
            foreach (IListBlobItem item in container.ListBlobs(null, true))
            {
                CloudBlockBlob blob = (CloudBlockBlob)item;
                blob.DeleteIfExists();
            }
        }
    }
}

Azure App Service

As mentioned, PowerApps and Flow do not currently offer any OOB functionality to upload a captured image to a specific Azure Storage Blob, therefore we need to create a custom WebApi service hosted as an Azure App Service which will do this work for us.

First we need to create a Cognitive Services resource in Microsoft Azure. You can simply search for “Cognitive” to quickly find the Cognitive Services resource within Azure.

Create a new Cognitive Services resource and make sure you select Face API as the API Type.

Once created, take note of the FaceAPI resource key, which will be used when connecting to the service going forward.

To simplify things, I am going to base the WebApi solution on the one in this blog post by Microsoft: https://powerapps.microsoft.com/en-us/blog/custom-api-for-image-upload/. It covers the creation of the WebApi service, publishing of the app to Azure and finally the registration of the custom service in PowerApps.

I will be making changes to the UploadImage method to include the FaceAPI functionality and the Dynamics 365 contact lookup on successful facial recognition. I will also add to the UploadedFileInfo class to include FirstName, LastName, StudentID and Confidence properties which will be used in the PowerApp form.

To implement these changes, you will first need to add the NuGet packages for Dynamics 365 and FaceAPI to the solution mentioned above:

  • Microsoft.ProjectOxford.Face
  • Microsoft.CrmSdk.CoreAssemblies
  • Microsoft.CrmSdk.Deployment
  • Microsoft.CrmSdk.Workflow
  • Microsoft.CrmSdk.XrmTooling.CoreAssembly

For us to facially verify a person in an image, we first need to detect the face within the image. We do this by calling the DetectAsync method, which returns a list of detected faces with their attributes. We will perform a detect on both the anonymous image and each image stored in the entityimages blob from Dynamics 365. Once we have detected the face in the anonymous image, we then verify this face against the face in the Dynamics 365 contact image by calling the VerifyAsync method and passing the two face objects for comparison.

The VerifyAsync method returns a result with IsIdentical and Confidence properties telling us whether the faces were a match. Once we confirm the faces are a match, we can strip the contact ID from the filename (or alternatively use the metadata on the blob object) to retrieve the contact entity from Dynamics 365.

The following changes have been made to the UploadImage method:

public async Task<IHttpActionResult> UploadImage(string fileName = "")
{
    //Use a GUID in case the fileName is not specified
    if (fileName == "")
    {
        fileName = Guid.NewGuid().ToString();
    }

    //Check if submitted content is MIME multipart content with form-data in it
    if (!Request.Content.IsMimeMultipartContent("form-data"))
    {
        return BadRequest("Could not find file to upload");
    }

    //Read the content in an in-memory multipart form-data format
    var provider = await Request.Content.ReadAsMultipartAsync(new InMemoryMultipartFormDataStreamProvider());

    //Get the first file
    var files = provider.Files;
    var uploadedFile = files[0];

    //Extract the file extension
    var extension = ExtractExtension(uploadedFile);
    //Get the file's content type
    var contentType = uploadedFile.Headers.ContentType.ToString();

    //Create the full name of the image with the fileName and extension
    var imageName = string.Concat(fileName, extension);

    //Initialise Blob and FaceAPI connections
    var storageAccount = CloudStorageAccount.Parse(ConfigurationManager.AppSettings["StorageConnectionString"]); //Azure storage account connection
    var _faceServiceClient = new FaceServiceClient(ConfigurationManager.AppSettings["FaceAPIKey"]); //FaceAPI connection
    var blobClient = storageAccount.CreateCloudBlobClient();
    var anonContainer = blobClient.GetContainerReference("powerappimages"); //camera control images
    var d365Container = blobClient.GetContainerReference("entityimages"); //Dynamics 365 contact images
    var contactId = Guid.Empty;
    double confidence = 0;
    Entity crmContact = null;

    var blockBlob = anonContainer.GetBlockBlobReference(imageName);
    blockBlob.Properties.ContentType = contentType;

    //Upload anonymous image from camera control to powerappimages blob
    using (var fileStream = await uploadedFile.ReadAsStreamAsync()) //as Stream is IDisposable
    {
        await blockBlob.UploadFromStreamAsync(fileStream);
    }

    //Detect faces in the uploaded anonymous image
    Face[] anonymousfaces = await _faceServiceClient.DetectAsync(blockBlob.Uri.ToString(), returnFaceId: true, returnFaceLandmarks: true);

    //Iterate stored contact entity images and verify the identity
    foreach (IListBlobItem item in d365Container.ListBlobs(null, true))
    {
        CloudBlockBlob blob = (CloudBlockBlob)item;
        Face[] contactfaces = await _faceServiceClient.DetectAsync(blob.Uri.ToString(), returnFaceId: true, returnFaceLandmarks: true);
        VerifyResult result = await _faceServiceClient.VerifyAsync(anonymousfaces[0].FaceId, contactfaces[0].FaceId);
        if (result.IsIdentical)
        {
            //Face identified. Strip the contact id from the image filename and retrieve the contact
            MatchCollection mc = Regex.Matches(blob.Uri.ToString(),
                @"([a-z0-9]{8}[-][a-z0-9]{4}[-][a-z0-9]{4}[-][a-z0-9]{4}[-][a-z0-9]{12})");
            contactId = Guid.Parse(mc[0].ToString());
            confidence = Math.Round((result.Confidence * 100), 2);
            crmContact = GetContact(contactId);
            break; //stop iterating once a match is found
        }
    }

    var fileInfo = new UploadedFileInfo
    {
        FileName = fileName,
        FileExtension = extension,
        ContentType = contentType,
        FileURL = blockBlob.Uri.ToString(),
        ContactId = contactId.ToString(),
        Confidence = confidence.ToString(),
        FirstName = crmContact?.GetAttributeValue<string>("firstname"),
        LastName = crmContact?.GetAttributeValue<string>("lastname"),
        StudentID = crmContact?.GetAttributeValue<string>("sms_studentid")
    };
    return Ok(fileInfo);
}



public static Entity GetContact(Guid id)
{
    using (var _crmClient = new CrmServiceClient(ConfigurationManager.AppSettings["CRMConnectionString"]))
    {
        ColumnSet cols = new ColumnSet(new string[] { "firstname", "lastname", "sms_studentid" });
        var result = _crmClient.Retrieve("contact", id, cols);
        return result;
    }
}

D365 PowerApp

Next, I created a basic PowerApp consisting of a camera control, a toggle button and four text fields to surface the student data.

I will go through each control and the configuration needed to make the solution work.

Camera Control

The camera control exposes an event named OnSelect against which we can write some code, and this is exactly where we will plug in our WebApi web service.

If you followed the MS blog correctly (https://powerapps.microsoft.com/en-us/blog/custom-api-for-image-upload/), you should have a data source available named ImageUploadAPI which represents the WebApi service we created and published to Azure. This data source allows us to reference the service methods UploadImage and its return object UploadedFileInfo.

So, by clicking on the camera control, selecting the Action tab and clicking On select, we can add code to the event much like you would in an Excel formula.

UpdateContext({ ResultValue : ImageUploadAPI.UploadImage(Camera1.Photo) });

The UpdateContext function is used to create a context variable, which temporarily holds a piece of information, in our case the object returned by the WebApi service. The ResultValue variable acts as a datasource once created and can be bound to any control.

*We also have the capability of stringing multiple methods one after another, separated by semicolons. The PowerApp will execute these methods sequentially. In my demo, I strung in a Microsoft Flow after the UploadImage method which would accept the ContactId; within the Flow logic it would retrieve the contact from Dynamics 365 and fire off an email notifying them that they had just been successfully verified.
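As a sketch, assuming a Flow named FlowSendEmail has been added to the app (the name is hypothetical), the chained OnSelect formula could look like this:

```
UpdateContext({ ResultValue : ImageUploadAPI.UploadImage(Camera1.Photo) });
FlowSendEmail.Run(ResultValue.ContactId)
```

A Flow added to a PowerApp is invoked with FlowName.Run(...), and here it only fires after the UploadImage call has populated ResultValue.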

Toggle Button

The toggle button simply allows the user to switch between the front and rear cameras on a mobile device, and the configuration is really straightforward.

Click on the camera control and, within the right-hand side properties pane, click the Advanced tab.

In the Design section, enter Toggle1.Value in the Camera property.

Text Boxes

Finally, we have four text boxes representing First Name, Last Name, Student ID and Confidence. Earlier, on the camera control, we explicitly created the variable ResultValue, which is populated by our WebApi service when uploading and verifying the captured image. All that is left to do is to set these textbox values to the relevant properties of the ResultValue object.

ResultValue represents the UploadedFileInfo class in the WebApi service, and we will be using its FirstName, LastName, StudentID and Confidence properties to populate the textbox values. We also want to validate that the ContactId property contains a valid contact GUID (a match was verified) rather than an empty GUID (no match was found).

Click on the textbox and within the right-hand side properties pane click Advanced, and set the Text properties to the following values:

First Name field:

If(ResultValue.ContactId = "00000000-0000-0000-0000-000000000000", "Not Found", ResultValue.FirstName)

Last Name field:

If(ResultValue.ContactId = "00000000-0000-0000-0000-000000000000", "Not Found", ResultValue.LastName)

Student ID field:

If(ResultValue.ContactId = "00000000-0000-0000-0000-000000000000", "Not Found", ResultValue.StudentID)

Confidence field:

ResultValue.Confidence & "%"

The Final Result

Ensure you have set images on a few of your contacts in Dynamics 365.

Make sure you run the console application code to upload all Dynamics 365 contact images to the Azure Storage Blob.

Install the PowerApps app on your mobile device of choice or simply use the PowerApps designer to test the PowerApp. I have installed PowerApps on my iPhone 7 and have signed in using my Dynamics 365 credentials.

Now you can simply capture an image of the contact by tapping the camera control or take a picture of yourself if you have uploaded a selfie to one of your contacts. The PowerApp will query the WebApi service and finally return the result of the verification.

Happy Coding!
