
Workflows over Windows Azure

Nowadays, almost all enterprise applications have at least one workflow defined. It is not only complex applications that need workflows; even a simple e-commerce application can define a workflow to manage orders or product stock, for example.
Supporting a workflow in our application can be done in two ways. The first approach is to look at the solutions available on the market and choose the one most suitable for our project. This approach gives us a workflow mechanism, but at the same time it can generate other costs through licensing and/or developing custom functionality.
The second approach is to develop the workflow mechanism from scratch. This can be quite tricky, because there are a lot of problems that need to be solved: a failover mechanism, rule definitions, the guarantee that no message in the workflow is lost, and many more things that we need to define and implement on our own.
All the data flowing through the workflow needs to be persisted somewhere. Different solutions can be used, from relational databases to NoSQL or in-memory databases, and whichever persistence method we choose will consume resources from our infrastructure.
Besides this, applying the rules defined in the workflow requires a lot of computation power. Even simple rules can become a nightmare if you need to process 100,000, 200,000 or even 500,000 messages per hour.
One of the most important things required of a workflow mechanism is availability. We don't want an e-commerce application that cannot accept new orders because the workflow mechanism is down or is too busy with other orders. Even if our workflow mechanism is very scalable, more instances mean more resources and, in the end, more money.
So far we have seen the different requirements of a workflow mechanism. All of these requirements translate into time, resources and money.
Windows Azure can help us when we need a workflow mechanism. Windows Azure Service Bus offers us the possibility to define a workflow very easily: we can define rules, states and custom actions while the system is running.
First of all, let's see what Windows Azure Service Bus is. It is a brokered messaging infrastructure that can deliver a message to more than one listener. Each listener subscribes its interest to a specific topic, and messages are added to the system through that topic. Once a message is added to the topic, the Windows Azure infrastructure guarantees that the message will be delivered to all subscribers.
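To get a feeling of how little code this needs, below is a minimal sketch that uses the brokered messaging client library (Microsoft.ServiceBus); the connection string and the 'orders'/'allOrders' names are placeholders chosen only for this example:

using Microsoft.ServiceBus;
using Microsoft.ServiceBus.Messaging;

// Illustrative names only; replace the connection string with the one of your namespace.
string connectionString = "Endpoint=sb://...";
var namespaceManager = NamespaceManager.CreateFromConnectionString(connectionString);

// Create the topic and a subscription that listens to it.
if (!namespaceManager.TopicExists("orders"))
    namespaceManager.CreateTopic("orders");
if (!namespaceManager.SubscriptionExists("orders", "allOrders"))
    namespaceManager.CreateSubscription("orders", "allOrders");

// Publish a message to the topic; every matching subscription receives a copy of it.
var topicClient = TopicClient.CreateFromConnectionString(connectionString, "orders");
var message = new BrokeredMessage("order payload");
message.Properties["State"] = "New";
topicClient.Send(message);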
The power of Windows Azure Service Bus with respect to workflows is the filtering mechanism that can be defined at subscription level. Each subscription can have one or more rules attached, and the subscription accepts only the messages that match those rules.
Figure 1: Workflow definition over Service Bus
The rules can perform different checks, from simple string (flag) comparisons to more complex expressions. Using these rules we can define a workflow over one or more topics from Windows Azure Service Bus. Each state of our workflow can have a subscription assigned to it, which guarantees that messages in a given state are received only by that specific subscription. In this way we can have subscriptions that process messages only in a given state.
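For example, a 'State' property set on each message can be mapped to one subscription per workflow state using SQL filters. The sketch below reuses the namespaceManager and topicClient objects from the previous snippet; the names are again only illustrative:

// One subscription per workflow state, each with a SQL filter over a 'State' property
// carried by the messages.
namespaceManager.CreateSubscription("orders", "pendingOrders",
    new SqlFilter("State = 'Pending'"));
namespaceManager.CreateSubscription("orders", "approvedOrders",
    new SqlFilter("State = 'Approved'"));

// Only the messages whose State property matches the filter reach the subscription.
var pendingOrder = new BrokeredMessage("order #42");
pendingOrder.Properties["State"] = "Pending";   // routed to 'pendingOrders' only
topicClient.Send(pendingOrder);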
From the scalability point of view, we can have more than one subscriber per subscription. This means that messages in a given state can be processed in parallel by multiple instances, while each message from a subscription is received by only one subscriber (listener).
A message can be consumed from a subscription in two ways: Peek and Lock or Receive and Delete. With the first method, a message is removed from the subscription only when the receiver confirms that it was processed successfully; otherwise the message becomes available for consumption again. We also have support for dead letters: we can mark a message as corrupted and it will be moved to a dead-letter sub-queue that contains the messages marked with this flag. A nice feature related to dead letters is that a message can be dead-lettered automatically when the number of delivery retries reaches a specific value.
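A Peek and Lock receive could look like the sketch below, where ProcessOrder stands for a hypothetical processing method and the names are the ones used in the previous snippets:

// Peek and Lock: the message stays locked until we confirm the outcome.
var subscriptionClient = SubscriptionClient.CreateFromConnectionString(
    connectionString, "orders", "pendingOrders", ReceiveMode.PeekLock);

BrokeredMessage received = subscriptionClient.Receive();
if (received != null)
{
    try
    {
        ProcessOrder(received);   // hypothetical business logic
        received.Complete();      // success: the message is removed from the subscription
    }
    catch (Exception)
    {
        // The message becomes available again for another receiver; a message that we
        // consider corrupted could instead be moved aside with received.DeadLetter().
        received.Abandon();
    }
}

The automatic dead-lettering mentioned above is controlled by the MaxDeliveryCount property of the subscription, which can be set when the subscription is created.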
Windows Azure also gives us the possibility to define a custom action that is executed over a message the moment it arrives in a subscription. For example, we can add a new property to the message that represents the sum of two other properties. Using this feature we can very easily change the properties of an item when its state changes.
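Such an action can be attached to the subscription rule as a SqlRuleAction, for example (a sketch with illustrative names, reusing the namespaceManager from above):

// A rule that combines a filter with an action: when a 'Priced' message arrives, a new
// 'Total' property is computed from two existing properties (all names are illustrative).
var rule = new RuleDescription
{
    Name = "computeTotal",
    Filter = new SqlFilter("State = 'Priced'"),
    Action = new SqlRuleAction("SET Total = Price + Shipping")
};
namespaceManager.CreateSubscription("orders", "pricedOrders", rule);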
For the special cases where items move from one state to another without any custom action, we can use the auto-forward feature of subscriptions. Windows Azure Service Bus gives us the possibility to forward a message to another topic automatically, so we don't need to retrieve the message from the subscription and send it to the next topic ourselves.
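Auto-forwarding is configured on the subscription itself, for example (again only a sketch; the 'archive' topic and the other names are placeholders):

// Auto-forward: messages that match this subscription are moved to another topic
// automatically, without a receiver pulling and re-sending them.
// 'archive' is an illustrative topic name and must exist before the subscription is created.
var forwardingSubscription = new SubscriptionDescription("orders", "completedOrders")
{
    ForwardTo = "archive"
};
namespaceManager.CreateSubscription(forwardingSubscription, new SqlFilter("State = 'Completed'"));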
Windows Azure Service Bus is a very scalable system: it can support as many as 10,000 topics per service namespace, and each topic can have a maximum of 2,000 subscriptions and 5,000 concurrent receive requests. This means that on a single topic we can define a workflow that has 2,000 states, and nothing stops us from defining a workflow that spans more than one topic.
From the cost perspective, we are charged $0.01 per 10,000 messages sent to or delivered by Windows Azure Service Bus. This means that we can send 1 million messages to the Service Bus for only $1. If you use this service from an application hosted in the same datacenter, you are not charged for data traffic; otherwise, outbound traffic is charged at a rate that starts at $0.15 per GB.
Workflow Manager, built on top of Windows Workflow Foundation, was launched at the end of last year. It supports integration of workflows with Windows Azure Service Bus, offering better support for reliability, asynchronous processing and coordination.
In conclusion, defining a workflow over Windows Azure Service Bus can simplify our workflow mechanism. The service is available from any location in the world and is very scalable. With features like dead letters, automatic message forwarding and the guarantee that messages are not lost, Windows Azure Service Bus is one of the best candidates when we need a workflow mechanism.
