Workflows over Windows Azure

Nowadays, almost every enterprise application has at least one workflow defined. It is not only complex applications that need workflows: even a simple e-commerce application can have a workflow to manage orders or product stock, for example.
There are two ways to support a workflow in our application. The first approach is to look at the solutions already available on the market and choose the one that is most suitable for our project. This approach gives us a workflow mechanism, but at the same time it can generate additional costs through licensing and/or developing custom functionality.
The second approach is to develop the workflow mechanism from scratch. This solution can be pretty tricky because there are many problems that need to be solved: a failover mechanism, rule definitions, the guarantee that no message in the workflow is lost, and many more things that we have to define and implement ourselves.
All the data that flows through the workflow needs to be persisted somewhere. Different solutions can be used, from relational databases to NoSQL or in-memory databases, but whichever persistence method we choose will consume resources from our infrastructure.
Besides this, applying the rules defined in the workflow requires a lot of computing power. Even simple rules can become a nightmare if you need to process 100,000, 200,000 or even 500,000 messages per hour.
One of the most important requirements of a workflow mechanism is availability. We don't want an e-commerce application that cannot accept new orders because the workflow mechanism is down or too busy with other orders. Even if we have a workflow mechanism that scales very well, more instances mean more resources and, in the end, more money.
Until now we have seen the different requirements of a workflow mechanism. All these requirements translate into time, resources and money.
Windows Azure can help us when we need a mechanism for workflows. Windows Azure Service Bus offers us the possibility to define a workflow very easily. We can define rules, states and custom actions while the system is running.
First of all, let's find out what Windows Azure Service Bus is. It is a brokered messaging infrastructure that can deliver a message to more than one listener. Each listener subscribes its interest to a specific topic, and messages are added to the system through that topic. Once a message is added to the topic, the Windows Azure infrastructure guarantees that it will be delivered to all subscribers.
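To make this concrete, here is a minimal sketch of creating a topic and sending a message to it, using the .NET brokered messaging client (Microsoft.ServiceBus.Messaging). The connection string, the "orders" topic name and the "State" property are placeholders chosen for this example.

using Microsoft.ServiceBus;
using Microsoft.ServiceBus.Messaging;

string connectionString = "Endpoint=sb://[your-namespace].servicebus.windows.net/;...";

// Create the topic if it does not already exist.
var namespaceManager = NamespaceManager.CreateFromConnectionString(connectionString);
if (!namespaceManager.TopicExists("orders"))
{
    namespaceManager.CreateTopic("orders");
}

// Send a message to the topic; every subscription whose rules match it will receive a copy.
var topicClient = TopicClient.CreateFromConnectionString(connectionString, "orders");
var message = new BrokeredMessage("order payload");
message.Properties["State"] = "New";   // the property our workflow rules will look at
topicClient.Send(message);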
The power of Windows Azure Service Bus related to workflows is the filtering mechanism that can be defined at subscription level. Each subscription can have one or more rules attached, and these rules are used by the subscription to accept only the messages that satisfy them.
Figure 1: Workflow definition over Service Bus
The rules that can be defined can make different checks, from simple ones that compare strings (flags) to more complex ones. Using these rules we can define a workflow over one or more topics from Windows Azure Service Bus. Each state of our workflow can have a subscription assigned to it. This guarantees that messages with a given state are received only by a specific subscription, so each subscription processes only the messages that are in its state.
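As a sketch, and reusing the connection string from the previous snippet, each workflow state could be mapped to a subscription with a SqlFilter on the "State" property (the state and subscription names below are invented for illustration):

var namespaceManager = NamespaceManager.CreateFromConnectionString(connectionString);

// One subscription per workflow state; each accepts only messages in that state.
namespaceManager.CreateSubscription("orders", "newOrders", new SqlFilter("State = 'New'"));
namespaceManager.CreateSubscription("orders", "paymentPending", new SqlFilter("State = 'PaymentPending'"));
namespaceManager.CreateSubscription("orders", "shipped", new SqlFilter("State = 'Shipped'"));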
From the scalability point of view, we can have more than one subscriber for each subscription. This means that messages with a given state can be processed in parallel by multiple instances, while each individual message from a subscription is still received by only one subscriber (listener).
A message can be consumed from a subscription in two ways – Peek and Lock or Receive and Delete. With the first method, a message is removed from the subscription only when the receiver confirms that it was processed successfully; otherwise the message becomes available for consumption again. We also have support for dead letters: we can mark a message as corrupted and it will be moved to a dead-letter sub-queue of the subscription that contains the messages marked with this flag. A nice feature related to dead letters is that a message can be dead-lettered automatically when the number of delivery attempts reaches a specific value.
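A minimal receive step in Peek and Lock mode might look like the sketch below; ProcessOrder and CorruptedMessageException are hypothetical names used only to show where Complete, Abandon and DeadLetter fit, and the automatic dead-lettering threshold is configured on the subscription itself (MaxDeliveryCount).

var client = SubscriptionClient.CreateFromConnectionString(
    connectionString, "orders", "newOrders", ReceiveMode.PeekLock);

BrokeredMessage message = client.Receive();
if (message != null)
{
    try
    {
        ProcessOrder(message);      // hypothetical processing step
        message.Complete();         // confirm success: the message is removed from the subscription
    }
    catch (CorruptedMessageException)
    {
        message.DeadLetter();       // corrupted: move it to the dead-letter sub-queue
    }
    catch (Exception)
    {
        message.Abandon();          // release the lock so the message can be consumed again
    }
}

// Automatic dead-lettering after a number of failed deliveries:
// new SubscriptionDescription("orders", "newOrders") { MaxDeliveryCount = 5 }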
Windows Azure Service Bus also gives us the possibility to define a custom action that is executed on a message at the moment it arrives in a subscription. For example, we can add a new property to the message that represents the sum of two other properties. Using this feature we can very easily change the properties of an item when its state changes.
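Such an action can be attached as a rule on the subscription. In the sketch below (the Price, Quantity and Total property names are assumptions), the subscription is created together with a rule whose SqlRuleAction computes a new property whenever a matching message arrives, instead of using the plain filter shown earlier:

var rule = new RuleDescription
{
    Name = "computeTotal",
    Filter = new SqlFilter("State = 'New'"),
    Action = new SqlRuleAction("SET Total = Price * Quantity")   // runs over the message properties
};

namespaceManager.CreateSubscription(new SubscriptionDescription("orders", "newOrders"), rule);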
If we have special cases where an item moves from one state to another without any custom action, we can use the forward feature of a subscription. Windows Azure Service Bus gives us the possibility to forward a message to another topic automatically. In this way we don't need to retrieve the message from the subscription and re-send it to the other topic ourselves.
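A sketch of this auto-forwarding setup, assuming a second topic named "archive" already exists, would be:

// Messages that land in this subscription are forwarded automatically to the "archive" topic.
var description = new SubscriptionDescription("orders", "shipped")
{
    ForwardTo = "archive"
};
namespaceManager.CreateSubscription(description);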
Windows Azure Service Bus is a very scalable system: it can support up to 10,000 topics per service namespace, and each topic can have a maximum of 2,000 subscriptions and 5,000 concurrent receive requests. This means that we can define, on a single topic, a workflow that has 2,000 states. Also, nothing stops us from defining a workflow that uses more than one topic.
From the cost perspective, we are charged $0.01 per 10,000 messages sent to or delivered by Windows Azure Service Bus. This means that we can send 1 million messages to Service Bus for only $1. If we use this service from an application that is hosted in the same datacenter, we are not charged for data traffic; otherwise, the outbound traffic is charged at a rate that starts at $0.15 per GB.
Workflow Manager, which builds on Windows Workflow Foundation, was launched at the end of last year. It supports integration of workflows with Windows Azure Service Bus, offering better support for reliability, asynchronous processing and coordination.
In conclusion, we saw that defining a workflow over Windows Azure Service Bus can simplify our workflow mechanism. This service is available from any location in the world and is very scalable. With features like dead letters, automatic message forwarding and the guarantee that messages are not lost, Windows Azure Service Bus is one of the best candidates when we need a workflow mechanism.
