
Check Durable Function Status when using event trigger


Working with Durable Functions inside Azure Functions is a great experience and enables us to achieve things that were not possible before. For patterns where you need fan-out/fan-in, Durable Functions made our life simpler by allowing us to run multiple functions in parallel and collect the data at the end.
When we use an HTTP(S) trigger, you receive a clear status of your current execution over a webhook. A similar approach can be achieved by monitoring the status from another function at a specific time interval. But when you are using event triggers, the rules of the game change.

Problem to solve
From a web application, we want to trigger a Durable Function using Azure Service Bus and be able to check the function status, so that we know when the function has finished.
Matching the instance ID
The problem appears when you use Azure Service Bus, or any other type of event trigger. In these situations, the instance ID that is necessary to check the current status of your function is not returned to the caller.
It happens because you decouple the two components using an event/message-based system. This makes it impossible for Azure Functions to return the instance ID to the caller. For these situations, you need to create a mechanism that allows you to discover the instance ID of your function.
Let’s assume that the trigger is a web application hosted inside Azure App Service, where we have the client session ID. The session ID can be used to match our instance ID to our session ID. Inside the Durable Function, we could write the combination [session ID, instance ID] as a tuple into external storage. In this way, our web application has the ability to discover the instance ID that can be used to check the current status of the execution. This key-value pair can be stored anywhere, from a simple Azure Table to Azure Cache for Redis or Cosmos DB. I recommend storing this data inside Azure Cache for Redis because you can set an expiration time that cleans up the content automatically, and it is cheaper than Cosmos DB.
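As a sketch, the mapping could be written from the function that starts the orchestration (an orchestrator function itself should not perform I/O directly, so the starter or an activity function is the usual place). The function, queue, and key names below are illustrative assumptions, using the StackExchange.Redis client:

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.DurableTask;
using StackExchange.Redis;

public static class StartOrchestration
{
    // Service Bus-triggered starter: begins the orchestration and stores the
    // [sessionId, instanceId] pair in Redis with an expiration time.
    // Queue, connection, and key names are assumptions for illustration.
    [FunctionName("StartOrchestration")]
    public static async Task Run(
        [ServiceBusTrigger("start-queue", Connection = "ServiceBusConnection")] string sessionId,
        [DurableClient] IDurableOrchestrationClient starter)
    {
        // Start the orchestrator and capture the generated instance ID.
        string instanceId = await starter.StartNewAsync("Orchestrator", input: sessionId);

        // Store the mapping with a TTL so stale entries clean themselves up.
        IDatabase cache = ConnectionMultiplexer
            .Connect(Environment.GetEnvironmentVariable("RedisConnection"))
            .GetDatabase();
        await cache.StringSetAsync($"session:{sessionId}", instanceId, TimeSpan.FromHours(1));
    }
}
```

The expiration time passed to `StringSetAsync` is what makes Redis attractive here: the cleanup happens automatically, without any extra maintenance job.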

HTTP endpoint
You should be aware that even if you don’t expose an HTTP trigger for the Durable Function, because you are using the Durable Functions extension inside Azure Functions, the HTTP endpoint of the orchestrator is still exposed and is used to manage and control the execution. The status query API is available at all times for your function and can be used with success to check the status.
It means that even if you use an event trigger, there is an HTTP endpoint that can be used to check the current status of the Durable Function that is executing.
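Concretely, the status query endpoint exposed by the Durable Task extension has the following shape (the host name and key are placeholders):

```
GET https://<function-app>.azurewebsites.net/runtime/webhooks/durabletask/instances/{instanceId}?code=<systemKey>
```

The JSON response includes fields such as runtimeStatus, input, output, and the creation/last-updated timestamps of the instance.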

Check durable function status
There are multiple ways to check the function status. A straightforward one is Application Insights; nevertheless, you are not always using Application Insights. Because of this, a straightforward solution in this context is to use the API exposed by the orchestration client, which provides an HTTP API that can be used to check the status of a function instance.
The full documentation of the HTTP API and the C# library can be found at the following links:
Another approach is to check the status of the Azure Function inside Azure Storage, where Durable Functions keeps its internal state. The solution would work, but if the structure of the data stored inside Azure Storage changed, the solution would become invalid. Because we cannot control the data structure, and there is no way to get a notification before it changes, I would flag this as a risk of that solution.
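A minimal sketch of the C# approach, using the orchestration client instead of touching the storage tables directly (the function name and the anonymous result shape are assumptions):

```csharp
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.DurableTask;
using Microsoft.Azure.WebJobs.Extensions.Http;

public static class GetOrchestrationStatus
{
    // Reads the orchestration status through the client API rather than
    // through the Azure Storage tables that Durable Functions uses internally.
    [FunctionName("GetOrchestrationStatus")]
    public static async Task<IActionResult> Run(
        [HttpTrigger(AuthorizationLevel.Function, "get")] HttpRequest req,
        [DurableClient] IDurableOrchestrationClient client)
    {
        string instanceId = req.Query["instanceId"];
        DurableOrchestrationStatus status = await client.GetStatusAsync(instanceId);
        if (status == null)
            return new NotFoundResult();

        // RuntimeStatus is e.g. Pending, Running, Completed, Failed, Terminated.
        return new OkObjectResult(new { status.RuntimeStatus, status.Output });
    }
}
```

Because the query goes through the client API, the code keeps working even if the internal storage layout of the extension changes between versions.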

Can I hide the HTTP API endpoint?
Not directly. The HTTP API exposed by the Azure Function for monitoring and querying is secured. You can access it only if you know the system key. The system key is an authorization key that is generated by the Azure Functions host. It grants access to the Durable Task extension APIs and can be managed the same way as other authorization keys. The simplest way to discover the systemKey value is by using the CreateCheckStatusResponse API. If you want to find out more, check https://docs.microsoft.com/en-us/azure/azure-functions/durable/durable-functions-http-api#http-api-reference
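For illustration, an HTTP-triggered starter using CreateCheckStatusResponse could look like this (the function and orchestrator names are assumptions); running it once is an easy way to see the systemKey value in the returned URLs:

```csharp
using System.Net.Http;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.DurableTask;
using Microsoft.Azure.WebJobs.Extensions.Http;

public static class HttpStart
{
    // The response body contains the management URLs (statusQueryGetUri,
    // terminatePostUri, ...), each carrying the ?code=<systemKey> parameter.
    [FunctionName("HttpStart")]
    public static async Task<HttpResponseMessage> Run(
        [HttpTrigger(AuthorizationLevel.Function, "post")] HttpRequestMessage req,
        [DurableClient] IDurableOrchestrationClient starter)
    {
        string instanceId = await starter.StartNewAsync("Orchestrator");
        return starter.CreateCheckStatusResponse(req, instanceId);
    }
}
```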
It is not possible by default to hide the API, beyond the authorization based on the system key. You can use the inbound IP restriction feature of Azure Functions to specify which IPs are allowed to access your functions.
When you are using an App Service plan or an App Service Environment, your functions are already inside the VNET and you can control access to your resources.

Final solution
Taking all of this into consideration, a possible solution is:
(1) The Web App sends a message to Azure Service Bus as a trigger for the Durable Function. The message contains the client session ID.
(2) The Durable Function is triggered by the message from Azure Service Bus.
(3) The function triggered at step (2) writes the [sessionID, instanceID] pair inside Azure Cache for Redis.
(4) The Durable Function runs (in our case, using the fan-out/fan-in pattern).
(5) The Web App reads the instanceID of the Durable Function from Azure Cache for Redis using the session ID.
(6) The Web App checks the Durable Function status using:
           C# – await client.GetStatusAsync(instanceId);
           HTTP API – GET /runtime/webhooks/durabletask/instances/{instanceId} …
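Steps (5) and (6) on the web application side could be sketched as follows. The host name, the environment variable names, the Redis key format, and the 5-second polling interval are all assumptions, not part of a prescribed implementation:

```csharp
using System;
using System.Net.Http;
using System.Threading.Tasks;
using StackExchange.Redis;

public static class StatusPoller
{
    // Resolve the instance ID from Redis by session ID, then poll the
    // Durable Functions status query endpoint until the run ends.
    public static async Task<string> WaitForCompletionAsync(string sessionId)
    {
        IDatabase cache = ConnectionMultiplexer
            .Connect(Environment.GetEnvironmentVariable("RedisConnection"))
            .GetDatabase();
        string instanceId = await cache.StringGetAsync($"session:{sessionId}");

        using var http = new HttpClient();
        string url = "https://myapp.azurewebsites.net/runtime/webhooks/durabletask" +
                     $"/instances/{instanceId}?code=" +
                     Environment.GetEnvironmentVariable("SystemKey");

        while (true)
        {
            string json = await http.GetStringAsync(url);
            // Crude check on runtimeStatus; a real implementation
            // would parse the JSON body properly.
            if (json.Contains("Completed") || json.Contains("Failed"))
                return json;
            await Task.Delay(TimeSpan.FromSeconds(5));
        }
    }
}
```

In a production system, the polling would typically also have a timeout and back-off, but the shape of the flow stays the same.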


Conclusion
You should take into account that the HTTP API exposed for managing the Azure Function is available at all times, even when you don’t have an HTTP trigger. Checking the status using the internal representation of the services (e.g. the Azure Storage used by Durable Functions) is not recommended – there is no control over when the internal representation might change and how fast you can react to map the new structure.
