
Service Bus Queues vs Windows Azure Queues

So far we have seen how to work with Service Bus Queues and how to handle various situations, but why should we use Service Bus Queues and not Windows Azure Queues? Should we always use Service Bus Queues? If so, why does Microsoft still offer Windows Azure Queues? These are the questions I will try to answer in this blog post.
First of all, let's look at the main differences between these two types of queues. Batched mode is fully supported only on Service Bus Queues, using transactions; Windows Azure Queues support batching only when receiving messages, with a maximum of 32 messages per batch. In the same manner, Service Bus Queues support a "Receive and Delete" mode, not only the "Peek and Lock" mode offered by Windows Azure Queues. "Peek and Lock" has a small downside: we need to make another request (another transaction) to remove the element from the queue, and this means additional costs. When we create a lock on a message, the maximum lock duration on Windows Azure Queues is 7 days, compared with Service Bus Queues, which accept a maximum of only 5 minutes. On Service Bus this setting is configured at the queue level, whereas on Windows Azure Queues it can be set separately for each message in the queue. Also, the metadata of a message can be changed on Windows Azure Queues; this is not supported on Service Bus Queues.
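To make the cost difference between the two receive modes concrete, here is a toy in-memory model (the class and method names are mine, not any Azure SDK): "Receive and Delete" finishes in one operation, while "Peek and Lock" needs a second, separately billed call to actually remove the message.

```python
from dataclasses import dataclass, field

@dataclass
class ToyQueue:
    """Toy in-memory queue modeling the two receive modes."""
    messages: list = field(default_factory=list)
    locked: dict = field(default_factory=dict)

    def receive_and_delete(self):
        # One operation: the message leaves the queue immediately.
        return self.messages.pop(0) if self.messages else None

    def peek_lock(self, lock_id):
        # First operation: hide the message behind a lock.
        if not self.messages:
            return None
        msg = self.messages.pop(0)
        self.locked[lock_id] = msg
        return msg

    def complete(self, lock_id):
        # Second (separately billed) operation: delete the locked message.
        return self.locked.pop(lock_id, None)

q = ToyQueue(messages=["a", "b"])
assert q.receive_and_delete() == "a"   # single round-trip
msg = q.peek_lock(lock_id=1)           # round-trip 1
q.complete(lock_id=1)                  # round-trip 2: the extra transaction
```

On Windows Azure Queues every message costs you two such round-trips; on Service Bus Queues you can choose the one-round-trip mode when losing a message on a consumer crash is acceptable.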

Service Bus Queues can guarantee that messages are delivered in order (FIFO), so we don't receive messages out of order. On Windows Azure Queues the order can be broken when a lock on a message expires and the message becomes available again. The concept of transactions can only be found on Service Bus Queues.
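The reordering scenario can be sketched with a small simulation (a toy model, not SDK code): a consumer takes the first message under a lock and crashes, and the message only becomes visible again after the later messages have already been processed.

```python
# Simulate how an expired lock can reorder delivery on Windows Azure Queues.
queue = ["m1", "m2", "m3"]
delivered = []

crashed = queue.pop(0)          # m1 is invisible while its lock is held
delivered.append(queue.pop(0))  # meanwhile m2 is processed
delivered.append(queue.pop(0))  # ... and m3 is processed
queue.append(crashed)           # the lock expires, m1 becomes visible again
delivered.append(queue.pop(0))  # m1 is processed last

assert delivered == ["m2", "m3", "m1"]  # FIFO order is lost
```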
What is also missing from Windows Azure Queues is automatic dead lettering, WCF integration, duplicate detection, and the ability to group messages (using a session id). On the other side, Service Bus Queues don't support in-place updates, purging the content of a queue, built-in log support, or storage metrics. From an audit perspective, I really miss log support on Service Bus Queues. Both types of queues support scheduled delivery, poison message support, and message deferral.
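As a rough illustration of how duplicate detection works on Service Bus Queues - a second message with the same MessageId arriving inside the detection window is dropped - here is a toy sketch (the class name and window handling are mine, not the actual SDK):

```python
import time

class DedupReceiver:
    """Toy duplicate detection keyed on message id, mimicking Service Bus's
    MessageId-based detection window (the window length is illustrative)."""
    def __init__(self, window_seconds=600):
        self.window = window_seconds
        self.seen = {}  # message_id -> first-seen timestamp

    def accept(self, message_id, now=None):
        now = time.time() if now is None else now
        # Forget ids older than the detection window.
        self.seen = {m: t for m, t in self.seen.items() if now - t < self.window}
        if message_id in self.seen:
            return False  # duplicate inside the window: dropped
        self.seen[message_id] = now
        return True

r = DedupReceiver(window_seconds=600)
assert r.accept("order-42", now=0.0) is True
assert r.accept("order-42", now=10.0) is False   # duplicate inside window
assert r.accept("order-42", now=700.0) is True   # window expired, accepted again
```

On Windows Azure Queues there is no such mechanism, so consumers must be idempotent if producers can retry a send.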
From the perspective of supported protocols, at this moment Service Bus Queues have some limitations (no Node.js support), but I expect this problem to be solved in the next update. The performance of both types of queues is very similar - around 2,000 messages per second can flow through a queue, and the latency is small (10 ms for Windows Azure Queues and 100 ms for Service Bus Queues). On Service Bus Queues the latency can create some problems if we want to consume a lot of messages in parallel, but there are solutions for this.
If we compare the maximum size of a message that can be put on a queue, Service Bus Queues lead with 256 KB (and in the future this may grow to 1 MB or more). But the downside is the maximum size of a queue, which is currently 5 GB, whereas Windows Azure Queues don't have any limit. The same applies to the number of queues: for a given namespace we can have a maximum of 10,000 queues in Service Bus, whereas Windows Azure Queues don't have any limit. The TTL of a message has a maximum value of 7 days on Windows Azure Queues, whereas Service Bus Queues accept an unlimited TTL - I don't like this, because in combination with the maximum queue size of 5 GB it can cause problems.
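The concern above is easy to quantify: with an unlimited TTL, a queue filled with maximum-size messages hits the 5 GB cap after surprisingly few messages. A quick back-of-the-envelope calculation:

```python
MAX_MESSAGE_KB = 256            # current Service Bus message size limit
MAX_QUEUE_GB = 5                # current Service Bus queue size limit

max_messages = (MAX_QUEUE_GB * 1024 * 1024) // MAX_MESSAGE_KB
print(max_messages)             # 20480 maximum-size messages fill the queue
```

In other words, if messages never expire and consumers fall behind, the queue can fill up and producers start getting errors.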
What I enjoy about Service Bus Queues is the way we can authenticate. Windows Azure Queues support only a symmetric key, whereas Service Bus Queues support ACS claims. Based on ACS claims we can have role-based access control and identity provider federation.
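The idea behind claim-based authorization can be sketched as follows (a toy model - the real ACS issues signed tokens carrying claims such as Send, Listen and Manage; the role names here are mine, chosen for illustration):

```python
# Each caller presents a set of claims (as ACS would issue in a token)
# and the queue checks the claim required for the requested operation.
PERMISSIONS = {
    "sender": {"Send"},
    "worker": {"Send", "Listen"},
    "admin":  {"Send", "Listen", "Manage"},
}

def authorize(role, operation):
    """Return True if the role carries the claim for this operation."""
    return operation in PERMISSIONS.get(role, set())

assert authorize("sender", "Send")
assert not authorize("sender", "Listen")   # a sender cannot consume messages
assert authorize("admin", "Manage")
```

With a symmetric key, by contrast, anyone holding the key can do everything, which is why this is the model I miss on Windows Azure Queues.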
The costs also differ depending on which type of queue we use. I won't dwell on the numbers, because these values can change. What is important here is:
on Service Bus Queues we don't pay a storage cost, but we do pay for ACS token requests
on Windows Azure Queues we pay for storage
These are the main differences between the two types of queues. Both of them are queues - Microsoft did not reinvent the concept - but based on our needs we can use one or the other.
In conclusion, let's see when we should use Service Bus Queues:
  • The queue is under 5GB
  • We want in the future to migrate to multi-subscribers (Service Bus Topics)
  • A message needs to be split into multiple messages
  • A message must be processed at-most-once
  • ACS claims and role-based authentication
  • WCF integration
  • The TTL needs to be more than 7 days
  • Duplicate detection for messages
  • FIFO order needs to be guaranteed
  • Transactions
And we should use Windows Azure Queues when:
  • We need logging and auditing over the queue
  • The total size of messages from the queue will exceed 5GB
  • We need to process a lot of messages in a short period of time
  • A message that is not fully processed needs to be picked up by another consumer very quickly
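The two checklists above can be summarized as a small decision helper (the requirement labels are mine, chosen only for illustration):

```python
def pick_queue(needs: set) -> str:
    """Toy decision helper mirroring the two checklists above."""
    service_bus_only = {"fifo", "transactions", "sessions", "wcf",
                        "duplicate-detection", "ttl>7d", "acs-claims"}
    azure_only = {"queue>5GB", "storage-logs", "very-high-throughput",
                  "fast-requeue"}
    if needs & service_bus_only and needs & azure_only:
        return "conflicting requirements - revisit the design"
    if needs & service_bus_only:
        return "Service Bus Queue"
    if needs & azure_only:
        return "Windows Azure Queue"
    return "either"

assert pick_queue({"fifo", "transactions"}) == "Service Bus Queue"
assert pick_queue({"queue>5GB"}) == "Windows Azure Queue"
```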
We cannot say that Service Bus Queues are better than Windows Azure Queues. The best queue is the one that best suits our needs.
